[ { "question": "Why is minimizing 2D SE important for SEGA?", "relevant_section_ids": [ "4.3" ], "relevant_context": [ "Under the federated framework described in Section 3.1, personalized global aggregation aims to provide clients with maximum external information by producing global models that can benefit individual clients more. The server needs an aggregation strategy that considers client heterogeneity and individual characteristics to maximize external knowledge for all clients. To achieve this objective, we construct a client graph Gclient based on clients’ similarity. By minimizing the two-dimensional Structural Entropy (2DSE) of Gclient, a graph capturing the internal similarities among clients is obtained, finalizing the Global Aggregation strategy for each client (SEGA).Gclient is an undirected, fully connected, weighted graph consisting of K nodes corresponding to K clients, with their similarities as edge weights. The similarity between client models can be estimated by providing them with the same input and measuring the similarity between their respective outputs. On this basis, the server first generates a random graph Grandom as input to all client models. With graph pooling, the server obtains different client models’ representations of the same graph." ], "final_answer": "Minimizing 2D structural entropy is important for global aggregation because it enables the creation of a client graph that captures the internal similarities among clients, which is used to finalize the Global Aggregation strategy, maximizing the external knowledge available to each client while considering client heterogeneity and individualcharacteristics.", "relevant_elements": [ "Minimizing 2D SE", "SEGA" ], "id": 4001, "masked_question": "Why is [mask1] important for [mask2]?", "masked_number": 2, "masked_elements": [ "Minimizing 2D SE", "SEGA" ], "figure_path": "./MISS-QA/figures/2409.00614v1_figure_1.png", "paperid": "2409.00614v1", "paper_path": "./MISS-QA/papers/2409.00614v1.json", "figure_id": "2409.00614v1_figure_1.png", "caption": "Figure 1. The overall framework of DAMe.", "qtype": "Design_Rationale" }, { "question": "What is the motivation behind combining the content of common and private stream in this framework?", "relevant_section_ids": [ "1.2" ], "relevant_context": [ "Under the federated framework described in Section 3.1, personalized global aggregation aims to provide clients with maximum external information by producing global models that can benefit individual clients more. The server needs an aggregation strategy that considers client heterogeneity and individual characteristics to maximize external knowledge for all clients. To achieve this objective, we construct a client graph Gclient based on clients’ similarity. By minimizing the two-dimensional Structural Entropy (2DSE) of Gclient, a graph capturing the internal similarities among clients is obtained, finalizing the Global Aggregation strategy for each client (SEGA).Gclient is an undirected, fully connected, weighted graph consisting of K nodes corresponding to K clients, with their similarities as edge weights. The similarity between client models can be estimated by providing them with the same input and measuring the similarity between their respective outputs. On this basis, the server first generates a random graph Grandom as input to all client models. 
With graph pooling, the server obtains different client models’ representations of the same graph" ], "final_answer": "Minimizing 2D structural entropy is important for global aggregation because it enables the creation of a client graph that captures the internal similarities among clients, which is used to finalize the Global Aggregation strategy, maximizing the external knowledge available to each client while considering client heterogeneity and individual characteristics.", "relevant_elements": [ "common and private stream" ], "id": 4002, "masked_question": "What is the motivation behind combining the content of [mask1] in this framework?", "masked_number": 1, "masked_elements": [ "common and private stream" ], "figure_path": "./MISS-QA/figures/2408.02872v1_figure_1.png", "paperid": "2408.02872v1", "paper_path": "./MISS-QA/papers/2408.02872v1.json", "figure_id": "2408.02872v1_figure_1.png", "caption": "Figure 1. System model of the proposed RSMA-based NOUM transmission.", "qtype": "Design_Rationale" }, { "question": "How does MACL achieve real subject similarity using multi-view data processing?", "relevant_section_ids": [ "3.3.2" ], "relevant_context": [ "The key idea of Multi-scale Appearance Similarity Contrastive Learning (MACL) is to ensure that the distance relationships between multiscale features are consistent with those of real subjects. This means the features of the same subject with different situations should be as close as possible (intra-consistency), while the distances between different samples’ features should match those between real subjects (inter-distinctiveness). As shown in Fig. 2(b)(right), we achieve intra-consistency by pulling positive samples of the reference subject closer, and inter-distinctiveness by introducing scaling factors to align the feature distances with negative samples to real subject distances. In this section, we will introduce the S+Space and MACL in the S+Space. As shown in Fig. 2(b) (right), We select frames different from the reference images as MACL positive samples. By aligning images of the same subject, CustomContrast effectively decouples irrelevant features of the subject. The processing details of positive samples are in Appendix B." ], "final_answer": "MACL achieves real subject similarity using multiview data by ensuring that the distance relationships between multiscale features are consistent with those of real subjects. This is done by maintaining intra-consistency, where features of the same subject with different situations are as close as possible, and inter-distinctiveness, where the distances between different samples' features match those between real subjects. MACL preserves the multi-scale similarity structure, ensuring that the similarities of learned features are positively correlated with those of real subjects.", "relevant_elements": [ "MACL", "multi-view data processing" ], "id": 4003, "masked_question": "How does [mask1] achieve real subject similarity using [mask2]?", "masked_number": 2, "masked_elements": [ "MACL", "multi-view data processing" ], "figure_path": "./MISS-QA/figures/2409.05606v2_figure_2.png", "paperid": "2409.05606v2", "paper_path": "./MISS-QA/papers/2409.05606v2.json", "figure_id": "2409.05606v2_figure_2.png", "caption": "Overview of the proposed CustomContrast. (a) Training pipeline. The consistency between textual and visual features is accurately learned by the MFI-Encoder, which includes a Textual-Visual (TV) Fusion module to enhance feature consistency from visual and textual Qformers. 
(b) The MCL paradigm includes CSCL, aligning high-level semantics by contrasting visual and textual embeddings via CLS tokens, and MACL, which is applied to text embeddings from different cross-attention layers. MACL decouples redundant subject features by aligning positive samples (segmented images of the same subject from various views, positions, and sizes), while preserving relative distances by contrasting with other subjects.", "qtype": "Implementation_Details" }, { "question": "What role do energy scores play in seen and unseen category estimation for online models?", "relevant_section_ids": [ "3.3" ], "relevant_context": [ "In Figure 3, we found that the two-stage split of the CGCD struggled to identify both known and unknown categories in our proposed online learning scenario. Online learning with batch-wise data led to severe forgetting in the split network, resulting in poor detection of unknown samples. In contrast, energy-based discovery showed better performance in novel category discovery without extra parameters compared to CGCD. Moreover, as it does not require an additional learning phase, energy-based discovery enables end-to-end training. Inspired by this observation, we propose an energy-guided discovery approach for novel category discovery. To the best of our knowledge, this is the first work to utilize the energy score for novel category discovery. The process in the second stage is the same as the first stage, where we split the unknown data into seen and unseen categories based on their energy scores. For the initial batch of incremental sessions, the online model is identical to the offline model. Additionally, since the initial batch data is the first data of the incremental session, any data classified as unknown is assumed to be unseen data. The proposed energy-guided discovery splits unlabeled data into known, seen, and unseen categories effectively. Unlike prior methods which require sufficient data, it can identify novel categories with batch-wise data." ], "final_answer": "Energy scores are used to classify unlabeled data into known and unknown categories in the first stage, and then further split unknown data into seen and unseen categories in the second stage. This is done by calculating the energy scores using a Gaussian Mixture Model to identify which cluster a sample belongs to, ultimately facilitating the estimation of seen and unseen data in the online model.", "relevant_elements": [ "energy scores", "seen and unseen category estimation" ], "id": 4004, "masked_question": "What role do [mask1] play in [mask2] for online models?", "masked_number": 2, "masked_elements": [ "energy scores", "seen and unseen category estimation" ], "figure_path": "./MISS-QA/figures/2408.13492v1_figure_2.png", "paperid": "2408.13492v1", "paper_path": "./MISS-QA/papers/2408.13492v1.json", "figure_id": "2408.13492v1_figure_2.png", "caption": "Overall process of the proposed DEAN framework. The energy-guided discovery splits unlabeled data into known, seen, and unseen data for better novel category discovery, while variance-based feature augmentation enhances the clustering of unseen data. Lce facilitates better discriminative learning in the online continual learning.", "qtype": "Implementation_Details" }, { "question": "What is the relationship between Photonic Processing Unit and eDRAMs?", "relevant_section_ids": [ "3.3" ], "relevant_context": [ "The computation process of our R&B architecture contains three stages, as illustrated in Fig. 2(a).
Initially, inputs are retrieved from eDRAMs, and corresponding weights are allocated to MRRs. Subsequently, following the PRM configuration, the weights are fixed and reused, allowing the inputs to pass through the MRRs to be optically weighted. The intermediate MVM results generated by the PPUs are then detected by BPDs, where they are converted into summed currents and digitized by ADCs. In the final stage, OBUs transform these outputs to generate the layer-wise results, which are then stored back in eDRAMs in preparation for the next computational layer. A critical aspect of this architecture is the role of the OBU during inference, mirroring its function during training and inference by executing essential shuffle and transpose operations. Along with PRM, these two technologies constitute the primary innovation of our R&B architecture. By leveraging one MRR array to represent multiple weight matrices, the architecture dramatically reduces the frequency of MRR writing operations, along with power consumption and latency, all while sustaining high performance." ], "final_answer": "The Photonic Processing Unit (PPU) plays a role in the R&B architecture’s computation process where inputs are initially retrieved from eDRAMs to be processed by PPUs. After processing, the outputs are stored back into eDRAMs for the next layer computation.", "relevant_elements": [ "Photonic Processing Unit", "eDRAMs" ], "id": 4005, "masked_question": "What is the relationship between [mask1] and [mask2]?", "masked_number": 2, "masked_elements": [ "Photonic Processing Unit", "eDRAMs" ], "figure_path": "./MISS-QA/figures/2409.01836v3_figure_2.png", "paperid": "2409.01836v3", "paper_path": "./MISS-QA/papers/2409.01836v3.json", "figure_id": "2409.01836v3_figure_2.png", "caption": "(a) Overview of the R&B architecture. Each PPU contains a photonic MVM unit and a sampling and hold (S&H) unit. (b) Photonic Reuse Method (PRM). Block-wise reuse allows weight sharing among blocks (a block typically contains multiple layers). Layer-wise reuse enables weight sharing between individual layers. (c) Opto-electronic Blend Unit (OBU). OBUs handle shuffle operations via the peripheral circuit and perform transpose operations directly in the optical domain. (d) Computing pipeline of our R&B architecture.", "qtype": "Literature_Background" }, { "question": "How does the Verification Strategy ensure high-quality data output?", "relevant_section_ids": [ "2.1", "3.1.1" ], "relevant_context": [ "The first module in our framework is the Quality Verification Agent, which ensures that the generated questions and answers meet a certain standard of quality. This component involves two main processes:\nVerification Strategy: This includes additional heuristic strategies to judge which samples should be contained as high-quality data.
Specifically, we utilize two widely-used verification strategies:\n• Scoring: We prompt LLMs to generate continuous scores, manually set a more reliable threshold score based on the validation set, and set those exceeding the threshold score as high-quality data.\n• Classification: We prompt LLMs to generate binary classification and select those classified as high-quality data.\nVerification Condition: This involves setting specific conditions that both questions and answers must meet to be considered high-quality verification. The process includes:\n• Criteria Perspectives: Criteria include relevance to the document, clarity, factual accuracy, logical coherence, and complexity of the question and answer.\n• Auxiliary Context Information: We integrate additional contextual instructions to enhance the model’s accuracy and robustness, like guidelines.\n• Auxiliary Generation Information: We enable the model to provide more reasoning rationale during output generation and observe whether this improves the robustness and accuracy of the verification process.", "Scoring is a Better Verification Strategy Compared with Classification. As shown in Figure 3 (a), the scoring strategy shows significantly higher kappa and precision scores compared to binary quality classification. This statistical improvement suggests that scoring better captures the nuances of human judgments. This observation aligns with findings in short-context scenarios (Fu et al., 2024a), reinforcing the generalizability of scoring strategies across different lengths of textual data." ], "final_answer": "The Verification Strategy ensures high-quality data output by employing scoring and classification strategies. Scoring involves prompting LLMs to generate continuous scores and setting a threshold to determine high-quality data. This strategy captures the nuances of human judgments better than binary classification, which simply classifies samples as high-quality or not. This process ensures consistency, precision, and alignment with human judgment, thus improving data quality.", "relevant_elements": [ "Verification Strategy", "high-quality data output" ], "id": 4007, "masked_question": "How does the [mask1] ensure [mask2]?", "masked_number": 2, "masked_elements": [ "Verification Strategy", "high-quality data output" ], "figure_path": "./MISS-QA/figures/2409.01893v1_figure_2.png", "paperid": "2409.01893v1", "paper_path": "./MISS-QA/papers/2409.01893v1.json", "figure_id": "2409.01893v1_figure_2.png", "caption": "The overall process of our Multi-agent Interactive Multi-hop Generation (MIMG) data synthesis framework.", "qtype": "Experimental_Results" }, { "question": "How does the reinforcement learning algorithm contribute to the updates of policy group?", "relevant_section_ids": [ "4", "4.1" ], "relevant_context": [ "In Section 3, we introduce the concept of environment agent to realize the adversarial policy search by combining logic rules with reinforcement learning. However, due to the black-box nature of data-driven methods, while adversarial actions can be generated, the difficulty of generating adversarial actions is difficult to quantify accurately, which limits the rationality of adversarial scenario generation. In this section, a data generation method based on scenarios with varying difficulty is presented.
The method uses the performance of different stages in the policy search convergence process as a reference to quantify the adversarial intensity, thereby achieving a quantitative representation of scenario difficulty. The model parameters of the environment agent trained on different stages are updated and saved, and then output to the constructed policy group. The policy group is used to generate data that forms the basis for training the scenario difficulty quantitative representation model.", "A reinforcement learning training process with stable convergence can be divided into two phases, i.e., the performance improvement phase and the convergence stabilization phase. In the performance improvement phase, the average return is still continuously increasing, which indicates that the policy search is still ongoing and the model parameters are still being updated to pursue better performance. In the convergence stabilization phase, however, the average return remains basically unchanged, indicating that the policy search is basically over, and the obtained policy is already the optimal policy that the current algorithm can achieve."
], "final_answer": "Potential limitations of using Lipschitz optimization in neural subspace training include the intractability of directly optimizing the Lipschitz constant due to the need to traverse all possible point pairs, sparse gradients that could damage Lipschitz characteristics in certain areas, increased memory usage, and potential memory shortages when training high-resolution meshes.", "relevant_elements": [ "Lipschitz optimization" ], "id": 4009, "masked_question": "What are potential limitations of using [mask1] in neural subspace training?", "masked_number": 1, "masked_elements": [ "Lipschitz optimization" ], "figure_path": "./MISS-QA/figures/2409.03807v1_figure_2.png", "paperid": "2409.03807v1", "paper_path": "./MISS-QA/papers/2409.03807v1.json", "figure_id": "2409.03807v1_figure_2.png", "caption": "Network training settings for effective neural subspace construction. (a) The supervised setting. (b) The unsupervised setting. Conventional methods only consider the loss shown in blue but do not optimize the Lipschitz loss (shown in orange) to control the landscape of simulation objective in the subspace.", "qtype": "Others" }, { "question": "What are the potential challenges of combining local SOP and global SOP in extracting meaningful image features?", "relevant_section_ids": [ "2.1" ], "relevant_context": [ "The state s is defined based on the ultrasound image. We have adopted an image quality classification network from our previous work , which used ResNet50 as a base network with multi-scale and higher-order processing of the image for conducting the holistic assessment of the image quality. The block diagram of this network is shown in Fig. 2. This classifier first extracts features at multiple scales to encode the inter-patient anatomical variations. Then, it uses second-order pooling (SoP) in the intermediate layers (local) and at the end of the network (global) to exploit the second-order statistical dependency of features. The local-to-global SoP will capture the higher-order relationships between different spatial locations and provide the seed for correlating local patches. This network encodes the image into a feature vector of size 2048, which represents the state of the policy." ], "final_answer": "Combining local and global second-order pooling (SoP) poses challenges such as increased computational complexity, potential feature redundancy, and the need for careful hyperparameter tuning. It demands substantial data to effectively handle multi-scale features while ensuring the model’s robustness. 
Additionally, balancing local and global information without conflicts can complicate optimization, particularly in real-time medical applications.", "relevant_elements": [ "local SOP", "global SOP" ], "id": 4010, "masked_question": "What are the potential challenges of combining [mask1] and [mask2] in extracting meaningful image features?", "masked_number": 2, "masked_elements": [ "local SOP", "global SOP" ], "figure_path": "./MISS-QA/figures/2409.02337v1_figure_2.png", "paperid": "2409.02337v1", "paper_path": "./MISS-QA/papers/2409.02337v1.json", "figure_id": "2409.02337v1_figure_2.png", "caption": "State space representation using a deep convolution neural network", "qtype": "Others" }, { "question": "How does tree attention mask interact with merged sequence?", "relevant_section_ids": [ "2.3" ], "relevant_context": [ "The traditional causal attention masks are designed for linear sequences, where each token attends to all previous tokens, restricting speculative decoding to verifying one sequence at a time. However, as the sequence lengthens during draft token generation, the number of potential continuations increases. For example, in the draft tree in Figure 2, the token following 'guest' could be 'speaker' or 'speak', while both 'at' and 'for' could follow 'speaker'. This creates a need to verify multiple draft sequences simultaneously. Tree attention modifies the attention mask to address this by compressing multiple sequences into a single merged sequence, such as ['guest', 'speaker', 'speak', 'at', 'for', 'ings'], while preserving a tree structure in the tree attention mask. Each child node attends only to its parent nodes, preventing sibling tokens from interfering with each other. After the LLM processes the merged sequence, all possible sequences such as 'guest speaker', 'guest speaker at', 'guest speaker for', and 'guest speak', along with their corresponding output tokens, are extracted based on the tree structure and verified in parallel." ], "final_answer": "The tree attention mask compresses multiple sequences into a single merged sequence while preserving a tree structure. Within this structure, each child node attends only to its parent nodes, preventing sibling tokens from interfering with each other. This allows the LLM to process and verify all possible sequences in parallel.", "relevant_elements": [ "tree attention mask", "merged sequence" ], "id": 4011, "masked_question": "How does [mask1] interact with [mask2]?", "masked_number": 2, "masked_elements": [ "tree attention mask", "merged sequence" ], "figure_path": "./MISS-QA/figures/2408.08696v1_figure_2.png", "paperid": "2408.08696v1", "paper_path": "./MISS-QA/papers/2408.08696v1.json", "figure_id": "2408.08696v1_figure_2.png", "caption": "An overview of Token Recycling. The adjacency matrix, initialized by inheriting from the previous query, stores candidate tokens. Token Recycling first retrieves a draft tree from the matrix based on the last token of the current content. The tree is then compressed into a merged sequence with a corresponding tree attention mask and sent to the LLM for a forward pass. After processing, all possible draft sequences are extracted and verified. The longest correct sequence is selected and added to the content. 
Finally, the top-k candidate tokens are used to update the matrix for the next iteration.", "qtype": "Literature_Background" }, { "question": "What are the benefits of using channel-wise concatenation in the processing of vision feature?", "relevant_section_ids": [ "2.4" ], "relevant_context": [ "We notice that existing popular fusion strategies, despite their variations in designs, can be broadly represented by the following several categories: (1) Sequence Append: directly appending the visual tokens from different backbones as a longer sequence; (2) Channel Concatenation: concatenating the visual tokens along the channel dimension without increasing the sequence length; (3) LLaVA-HR: injecting high-resolution features into low-resolution vision encoders using mixture-of-resolution adapter; (4) Mini-Gemini: using the CLIP tokens as the low resolution queries to cross-attend another high-resolution vision encoder in the co-located local windows. Although sequence append shows comparable performance to channel concatenation, it faces the challenge to handle more vision encoders due to the increasing sequence length. Hence, we choose direct channel concatenation as our fusion strategy considering its performance, expandability, and efficiency." ], "final_answer": "The benefits of using channel-wise concatenation in vision feature processing include achieving the best average performance, maintaining better throughput compared to sequence append, and offering performance, expandability, and efficiency.", "relevant_elements": [ "vision feature" ], "id": 4012, "masked_question": "What are the benefits of using channel-wise concatenation in the processing of [mask1]?", "masked_number": 1, "masked_elements": [ "vision feature" ], "figure_path": "./MISS-QA/figures/2408.15998v1_figure_2.png", "paperid": "2408.15998v1", "paper_path": "./MISS-QA/papers/2408.15998v1.json", "figure_id": "2408.15998v1_figure_2.png", "caption": "Overview of the Eagle exploration pipeline.", "qtype": "Design_Rationale" }, { "question": "How does the tailored zero-shot score contribute to the efficiency of neural architecture search?", "relevant_section_ids": [ "3.1" ], "relevant_context": [ "To enable a more accurate assessment of our hybrid networks, we integrate two selected zero-shot metrics. Given the significant difference in score magnitudes between these metrics, as shown in Figures 3(b) and 3(c), we focus on relative rankings rather than score magnitudes. Specifically, for a group of networks, the score of our tailored zero-shot metric for a specific network is determined by the relative ranking of its Zen-Score within the group. For instance, if a network exhibits the highest Zen-Score, its term yields a value of 1. The effectiveness of our tailored metric is validated through Table II and Figure 3, which demonstrate the highest Kendall-Tau Correlation. Additionally, this metric contributes to enhanced search efficiency due to the swift computational speed of both NN-Degree and Zen-Score. For example, assessing accuracy for an individual hybrid model from our supernet takes an average of several seconds, whereas computing our tailored zero-shot metric requires less time, making it over X times faster when tested on CIFAR100 and profiled on an NVIDIA GeForce RTX 2080Ti." ], "final_answer": "The tailored zero-shot score contributes to neural architecture search efficiency by enabling faster assessment due to its swift computational speed. 
The computation of the tailored zero-shot metric is significantly faster than assessing the accuracy of individual hybrid models derived from the supernet, leading to enhanced search efficiency.", "relevant_elements": [ "tailored zero-shot score", "neural architecture search" ], "id": 4013, "masked_question": "How does the [mask1] contribute to the efficiency of [mask2]?", "masked_number": 2, "masked_elements": [ "tailored zero-shot score", "neural architecture search" ], "figure_path": "./MISS-QA/figures/2409.04829v1_figure_2.png", "paperid": "2409.04829v1", "paper_path": "./MISS-QA/papers/2409.04829v1.json", "figure_id": "2409.04829v1_figure_2.png", "caption": "The overview of our NASH framework, where we integrate both the neural architecture search (NAS) and coarse-to-fine accelerator search to directly obtain optimal pairing of models and accelerators. Specifically, the NAS consists of a tailored zero-shot metric to pre-identify promising multiplication-reduce hybrid models before supernet training. Besides, the accelerator search involves a novel coarse-to-fine search strategy to expedite the accelerator search process.", "qtype": "Experimental_Results" }, { "question": "How does Recursive Token Merging interact with Self Attention module to enhance video consistency?", "relevant_section_ids": [ "3.3" ], "relevant_context": [ "TALO strategy perturbs each benign frame of video separately. This per-frame optimization makes the frames likely optimized along different adversarial directions resulting in motion discontinuity and temporal inconsistency. Furthermore, separately perturbing each benign frame reduces the monotonous gradients because the interactions among the frames are not exploited. To this end, we introduce a recursive token merging (ReToMe) strategy that recursively matches and merges similar tokens across frames together enabling the self-attention module to extract consistent features. In the following, we first provide the basic operation of token merging and token unmerging and then our recursive token merging algorithm.Token Merging (ToMe) is first applied to speed up diffusion models through several diffusion-specific improvements . Generally, tokens T are partitioned into a source (s⁢r⁢c) and destination (d⁢s⁢t) set. Then, tokens in s⁢r⁢c are matched to their most similar token in d⁢s⁢t, and r most similar edges are selected subsequently. Next, we merge the connected r most similar tokens in s⁢r⁢c to d⁢s⁢t by replacing them as the linked d⁢s⁢t tokens. To keep the token number unchanged, we divide merged tokens after self-attention by assigning their values to merged tokens in s⁢r⁢c.A self-attention module takes a sequence of input and output tokens across all frames. To partition tokens across frames into src and dst, we define stride as B. We randomly choose one out of the first B frames (e.g., the g-th frame), and select the subsequent frames every B interval into the dst set.Nevertheless, during the merging process expressed above, tokens in dst are not merged and compressed. To maximally fuse the inter-frame information, we recursively apply the above merging process to tokens in dst until they contain only one frame. Our ReToMe has three advantages. Firstly, ReToMe ensures that the most similar tokens share identical outputs, maximizing the compression of tokens. This approach fosters internal uniformity of features across frames and preserves temporal consistency, thereby effectively achieving temporal imperceptibility. 
Secondly, the merged tokens decrease interaction inside adversarial perturbations, effectively preventing overfitting on the surrogate model. Furthermore, the tokens linked to merged tokens facilitate inter-frame interaction in gradient calculation, which may induce more robust and diverse gradients. Therefore, ReToMe can effectively boost adversarial transferability." ], "final_answer": "Recursive Token Merging interacts with the Self Attention module by recursively matching and merging similar tokens across frames, enabling the Self Attention module to extract consistent features. This method ensures that the most similar tokens share identical outputs, which maximizes internal uniformity of features across frames and preserves temporal consistency, thereby enhancing video consistency.", "relevant_elements": [ "Recursive Token Merging", "Self Attention module" ], "id": 4014, "masked_question": "How does [mask1] interact with [mask2] to enhance video consistency?", "masked_number": 2, "masked_elements": [ "Recursive Token Merging", "Self Attention module" ], "figure_path": "./MISS-QA/figures/2408.05479v1_figure_2.png", "paperid": "2408.05479v1", "paper_path": "./MISS-QA/papers/2408.05479v1.json", "figure_id": "2408.05479v1_figure_2.png", "caption": "Framework overview of the proposed ReToMe-VA. For a video clip, DDIM inversion is applied to map the benign frames into the latent space. Timestep-wise Adversarial Latent Optimization is employed during the DDIM sampling process to optimize the latents. Throughout the whole pipeline, Recursive Token Merging and Recursive Token Unmerging Modules are integrated into the diffusion model to enhance its effectiveness. Additionally, structure loss is utilized to maintain the structural consistency of video frames. Ultimately, the resulting adversarial video clip is capable of deceiving the target model.", "qtype": "Literature_Background" }, { "question": "What is the importance of iterative parameter updating in retraining scheduling?", "relevant_section_ids": [ "2" ], "relevant_context": [ "It asynchronously reuses learned features from different subtasks and incorporates dynamic switching and incremental parameter updating to optimize the limited representation capacity of compressed mobile DNNs" ], "final_answer": "The importance of iterative parameter updating in retraining scheduling is to optimize the limited representation capacity of compressed mobile DNNs by incorporating dynamic switching and incremental parameter updating.", "relevant_elements": [ "iterative parameter updating", "retraining scheduling" ], "id": 4015, "masked_question": "What is the importance of [mask1] in [mask2]?", "masked_number": 2, "masked_elements": [ "iterative parameter updating", "retraining scheduling" ], "figure_path": "./MISS-QA/figures/2407.00016v1_figure_1.png", "paperid": "2407.00016v1", "paper_path": "./MISS-QA/papers/2407.00016v1.json", "figure_id": "2407.00016v1_figure_1.png", "caption": "Illustration of AdaBridge’s system workflow.", "qtype": "Design_Rationale" }, { "question": "What impact does incorporating physical constraint loss have on the predictions of LSTM block?", "relevant_section_ids": [ "2.5" ], "relevant_context": [ "Energy conservation asserts that in a conservative system, the total energy remains constant over time. This concept is particularly relevant in systems where external energy exchanges are absent.
To quantify alignment with energy conservation principles, we define an energy conservation loss function, ℒ_energy, which measures the discrepancy between the energy states of the input and output fields. This function is integrated into the overall loss function to enhance the adherence of the model to energy conservation." ], "final_answer": "Incorporating the physical constraint loss (the energy conservation loss, ℒ_energy) penalizes the discrepancy between the energy states of the input and output fields. Since this term is integrated into the overall loss function, the predictions of the LSTM block are driven to better adhere to energy conservation, yielding more physically consistent outputs.", "relevant_elements": [ "physical constraint loss", "LSTM block" ], "id": 4016, "masked_question": "What impact does incorporating [mask1] have on the predictions of [mask2]?", "masked_number": 2, "masked_elements": [ "physical constraint loss", "LSTM block" ], "figure_path": "./MISS-QA/figures/2409.00458v1_figure_1.png", "paperid": "2409.00458v1", "paper_path": "./MISS-QA/papers/2409.00458v1.json", "figure_id": "2409.00458v1_figure_1.png", "caption": "Schematic representation of physics-constrained CED-LSTM model employing Voronoi tessellation for enhanced state field mapping from sparse observations.", "qtype": "Literature_Background" }, { "question": "What are the specific functions of the RPN and the ROIHead in the Detector?", "relevant_section_ids": [ "2.2" ], "relevant_context": [ "Similarly, Meta R-CNN combines a two-stage detector and reweights RoI features in the detection head. Attention-RPN exploits matching relationship between the few-shot support set and query set with a contrastive training scheme, which can then be applied to detect novel objects without retraining and fine-tuning." ], "final_answer": "Unanswerable", "relevant_elements": [ "the RPN and the ROIHead", "Detector" ], "id": 4017, "masked_question": "What are the specific functions of the [mask1] in the [mask2]?", "masked_number": 2, "masked_elements": [ "the RPN and the ROIHead", "Detector" ], "figure_path": "./MISS-QA/figures/2408.05674v1_figure_2.png", "paperid": "2408.05674v1", "paper_path": "./MISS-QA/papers/2408.05674v1.json", "figure_id": "2408.05674v1_figure_2.png", "caption": "The overview of the proposed Prototype-based Soft-labels and Test-Time Learning (PS-TTL) framework for FSOD. Both the student and teacher networks are first initialized by the few-shot detector and then fine-tuned on test data. The teacher network takes test data as input to generate pseudo-labels, while the student model is trained using these pseudo-labels after post-processing with N-way K-shot data as supervision signals and updates the teacher network through EMA. A Prototype-based Soft-labels (PS) strategy is adopted to maintain class prototypes and compute the feature similarity between low-confidence pseudo-labels and class prototypes to replace them with soft-labels.", "qtype": "Implementation_Details" }, { "question": "How are EEG and adversarial example integrated into the model training process?", "relevant_section_ids": [ "3.4" ], "relevant_context": [ "Adversarial perturbations are image transformations capable of fooling ANNs while remaining imperceptible for humans. To assess the adversarial robustness of our models, we employed Foolbox to create adversarial versions of the 1654 original validation images under different attack strengths."
], "final_answer": "Unanswerable", "relevant_elements": [ "EEG", "adversarial example" ], "id": 4018, "masked_question": "How are [mask1] and [mask2] integrated into the model training process?", "masked_number": 2, "masked_elements": [ "EEG", "adversarial example" ], "figure_path": "./MISS-QA/figures/2409.03646v1_figure_1.png", "paperid": "2409.03646v1", "paper_path": "./MISS-QA/papers/2409.03646v1.json", "figure_id": "2409.03646v1_figure_1.png", "caption": "Paradigm for improving adversarial robustness via co-training with human EEG: We first trained dual-task learning (DTL) models with original and shuffled EEG data and then evaluated their robustness against various adversarial attacks. We trained four clusters of ResNet50 backbone models, each incorporating a different independent EEG predictor: Dense Layers (CNN), Recurrent Neural Networks (RNN), Transformer, and Attention layers. Finally, we measured the relationship between adversarial robustness gain and EEG prediction accuracy.", "qtype": "Literature_Background" }, { "question": "How are MLP and attention mechanism utilized to process utterance and description embeddings?", "relevant_section_ids": [ "3.4" ], "relevant_context": [ "This architecture is designed with a straightforward target that injects the personality information of each speaker into their corresponding utterances by a multi-layer perceptron network.Through this mechanism, all the utterances from the same speaker are shared in the unified speaker vector representation, while the weights are updated in the training process. Finally, the utterance vector is fused with the speaker vector which supports emotional classification.We consider a variant of our BiosERC model, which is engineered to dynamically incorporate the speaker’s information into each utterance via the attention mechanism. The relationship between the current utterance and all individual speakers is integrated to enrich the utterance vector representation." ], "final_answer": "In BiosERC, a multi-layer perceptron (MLP) network injects personality information of speakers into their corresponding utterances, creating a unified speaker vector representation. The attention mechanism dynamically incorporates speaker information into each utterance, modeling the relationship between the utterance and all speakers in a conversation to enrich the utterance vector representation.", "relevant_elements": [ "MLP", "attention mechanism" ], "id": 4019, "masked_question": "How are [mask1] and [mask2] utilized to process utterance and description embeddings?", "masked_number": 2, "masked_elements": [ "MLP", "attention mechanism" ], "figure_path": "./MISS-QA/figures/2407.04279v1_figure_2.png", "paperid": "2407.04279v1", "paper_path": "./MISS-QA/papers/2407.04279v1.json", "figure_id": "2407.04279v1_figure_2.png", "caption": "Overview of our BiosERC model architecture.", "qtype": "Literature_Background" }, { "question": "What role does FM play in shared decoder?", "relevant_section_ids": [ "3.4" ], "relevant_context": [ "Nevertheless, while multi-stage guidance proves beneficial in extracting valuable information from features at various levels, it is more challenging to maximize the mutual information between the conditional contrasts and the target MR contrast distributions. 
This is mainly due to intricate dependencies between multi-contrast imaging and finding more common and mutually adaptive feature representation. To overcome this challenge, we propose an adaptive feature maximize (FM) within the denoising network, unifying feature distributions as shown in Fig. 1(C). The distinction between local and global feature contrasts derived from the denoising and conditional feature distributions aids in adaptively assigning weights to more pertinent features. This adaptive weighting facilitates the selection of mutually dependent and highly effective shared representations within the latent distribution. Consequently, these representations can be leveraged to achieve more precise denoised target contrast." ], "final_answer": "The adaptive feature maximizer unifies feature distributions by utilizing encoded features from the Semantic Encoder and Diffusive Encoder, which undergo separate local and global feature extraction processes. It assigns weights based on feature relevance to facilitate the selection of mutually adaptive and effective shared representations, ultimately leading to more precise denoised target contrast.", "relevant_elements": [ "FM", "shared decoder" ], "id": 4020, "masked_question": "What role does [mask1] play in [mask2]?", "masked_number": 2, "masked_elements": [ "FM", "shared decoder" ], "figure_path": "./MISS-QA/figures/2409.00585v1_figure_1.png", "paperid": "2409.00585v1", "paper_path": "./MISS-QA/papers/2409.00585v1.json", "figure_id": "2409.00585v1_figure_1.png", "caption": "Network architecture of McCaD. A: Overall Architecture, B: Multi-scale Feature Guided Denoising Network to incorporate feature characteristics from conditional MRI contrasts at various stages to guide the reverse diffusion process, C: Adaptive Feature Maximizer, to weight more pertinent features within the latent space, D: Feature Attentive Loss to improve the perceptual quality of the synthetic results.", "qtype": "Implementation_Details" }, { "question": "How does the self-attention module contribute to the global alignment loss based on the results?", "relevant_section_ids": [], "relevant_context": [], "final_answer": "Unanswerable", "relevant_elements": [ "Self-attention", "Global alignment loss" ], "id": 572, "masked_question": "How does the [mask1] module contribute to the global alignment loss based on the results?", "masked_number": 1, "masked_elements": [ "Self-attention" ], "figure_path": "./MISS-QA/figures/0_2411.00609v1_figure_1.png", "paperid": "2411.00609v1", "paper_path": "./MISS-QA/papers/2411.00609v1.json", "figure_id": "2411.00609v1_figure_1.png", "caption": "Figure 1: The Proposed MRI-Report Contrastive Learning Framework", "qtype": "Experimental_Results" }, { "question": "What motivates attention-based Modal Fusion for integrating diverse modal-specific representations?", "relevant_section_ids": [ "3.2.2" ], "relevant_context": [ "Specifically, for each entity, we measure the distinct importance of its modality information with attention mechanism, and employ the attention weights to integrate modal-specific feature variables (sampled from Eq. (6)) as follows:", "where α_m is the attention weight for modality m, taking the different nature of entities into consideration.", "In this way, we obtain modal-hybrid feature variables considering the distinct modality importance of the entity and leverage the IB-refined modal-specific feature variables."
], "final_answer": "Because different entities rely on their modalities to varying degrees, the model uses an attention mechanism to dynamically measure and weight each modality’s contribution. This attention-based fusion ensures that the modal-hybrid representation integrates modal-specific features in proportion to their importance for each entity.", "relevant_elements": [ "Modal Fusion", "Modal-specific Features" ], "id": 601, "masked_question": "What motivates attention-based [mask1] for integrating diverse modal-specific representations?", "masked_number": 1, "masked_elements": [ "Modal Fusion" ], "figure_path": "./MISS-QA/figures/0_2407.19302v1_figure_2.png", "paperid": "2407.19302v1", "paper_path": "./MISS-QA/papers/2407.19302v1.json", "figure_id": "2407.19302v1_figure_2.png", "caption": "Figure 2. The framework of the proposed IBMEA for the multi-modal entity alignment task.", "qtype": "Design_Rationale" }, { "question": "How does the Submodular function integrate Clue Score and Collaboration Score to rank sub-regions iteratively?", "relevant_section_ids": [ "3.2" ], "relevant_context": [ "Clue Score: An essential aspect of interpretability is enabling the object-level foundation model to accurately locate and identify objects while using fewer regions. … Then, the clue score of sub-region s is defined as: (Equation for f_clue).", "Collaboration Score: Some regions may exhibit strong combination effects, meaning they contribute effectively to model decisions only when paired with multiple specific sub-regions. … Therefore, we introduce the collaboration score f_collab to assess sub-regions with high sensitivity to decision outcomes.", "Submodular Function: The scores above are combined to construct a submodular function f, as follows:", "Saliency Map Generation: Using the above submodular function, a greedy search algorithm is applied to sort all sub-regions in V, yielding an ordered subset S. Introducing the submodular function enables the search algorithm to more precisely identify key visual regions for interpretation.", "Since the saliency map requires all sub-regions to be ranked, k can be set to |V| to compute ordered subsets. When the set function f satisfies the properties of diminishing returns and monotonic non-negative, a greedy search guarantees an approximate optimal solution [7]." ], "final_answer": "The method first computes two scores per super-pixel: a Clue Score (how much that region alone supports detecting the target) and a Collaboration Score (how much removal of that region degrades detection in combination with others). It then defines a single submodular set function f by combining these two scores over any subset of regions. Because f is monotonic and has diminishing returns, a standard greedy algorithm can be used: at each iteration the region whose addition yields the largest marginal increase in f is selected next. Repeating this until all regions are chosen produces a ranked list of sub-regions by importance.", "relevant_elements": [ "Submodular function", "Clue Score", "Colla. 
Score" ], "id": 604, "masked_question": "How does the [mask1] integrate [mask2] and Collaboration Score to rank sub-regions iteratively?", "masked_number": 2, "masked_elements": [ "Submodular function", "Clue Score" ], "figure_path": "./MISS-QA/figures/0_2411.16198v1_figure_2.png", "paperid": "2411.16198v1", "paper_path": "./MISS-QA/papers/2411.16198v1.json", "figure_id": "2411.16198v1_figure_2.png", "caption": "Figure 2: Framework of the proposed Visual Precision Search method for interpreting an object-level foundation model. The input is first sparsified into a set of sub-regions and then interpreted across different instances. A submodular function guides the search for significant sub-regions, updating the ordered subset iteratively, and ultimately generating the instance-level attribution map.", "qtype": "Implementation_Details" }, { "question": "How does the attribution score assessment compute marginal effects over the Ordered Subset to produce saliency weights?", "relevant_section_ids": [ "3.2" ], "relevant_context": [ "Saliency Map Generation: Using the above submodular function, a greedy search algorithm is applied to sort all sub-regions in V, yielding an ordered subset S. ... We evaluate the salient difference between the two sub-regions by the marginal effect.", "The attribution score α(s_i) for each sub-region s_i in S is assessed by: α(s_i) = f(S_i) − f(S_{i−1}) (+ β for the first sub-region), where β represents a baseline attribution score for the first sub-region, and f denotes the set function.", "When a new sub-region is added, a small marginal increase suggests comparable importance to the previous sub-region. A negative marginal effect indicates a counterproductive impact, which can be assessed by its absolute value. Finally, α is normalized to obtain the saliency map of the sub-region." ], "final_answer": "The attribution score for each sub-region is computed as the marginal gain in the submodular scoring function when adding that region to the set of previously selected regions—i.e., α(s_i)=f(S_i)−f(S_{i−1}), with a small baseline β added for the very first region. These per‐region marginal gains are then normalized across the ordered subset to yield the final saliency weights.", "relevant_elements": [ "Attribution score assessment", "Ordered Subset S" ], "id": 605, "masked_question": "How does the [mask1] compute marginal effects over the Ordered Subset to produce saliency weights?", "masked_number": 1, "masked_elements": [ "Attribution score assessment" ], "figure_path": "./MISS-QA/figures/1_2411.16198v1_figure_2.png", "paperid": "2411.16198v1", "paper_path": "./MISS-QA/papers/2411.16198v1.json", "figure_id": "2411.16198v1_figure_2.png", "caption": "Figure 2: Framework of the proposed Visual Precision Search method for interpreting an object-level foundation model. The input is first sparsified into a set of sub-regions and then interpreted across different instances. A submodular function guides the search for significant sub-regions, updating the ordered subset iteratively, and ultimately generating the instance-level attribution map.", "qtype": "Implementation_Details" }, { "question": "How does the submodular function integrate clue and collaboration scores during Visual Precision Search?", "relevant_section_ids": [ "3.2" ], "relevant_context": [ "Clue Score: An essential aspect of interpretability is enabling the object-level foundation model to accurately locate and identify objects while using fewer regions. 
… Given a subregion r, the object-level model outputs T instances … Then, the clue score of sub-region r is defined as: (definition of clue score).", "Collaboration Score: Some regions may exhibit strong combination effects, meaning they contribute effectively to model decisions only when paired with multiple specific sub-regions. Therefore, we introduce the collaboration score S_collab(r) to assess sub-regions with high sensitivity to decision outcomes: (definition of collaboration score).", "Submodular Function: The scores above are combined to construct a submodular function f(S), as follows:" ], "final_answer": "The submodular function f(S) is formed by directly combining (i.e. summing) the per-region clue scores and collaboration scores. In other words, for any candidate set of superpixel sub-regions S, f(S) aggregates each region’s clue score (measuring how strongly it supports correct detection) together with its collaboration score (measuring how critical it is in combination with other regions), yielding a single submodular objective that balances both individual and joint contributions.", "relevant_elements": [ "Submodular function", "Clue Score", "Collaboration Score" ], "id": 606, "masked_question": "How does the [mask1] integrate [mask2] and collaboration scores during Visual Precision Search?", "masked_number": 2, "masked_elements": [ "Submodular function", "Clue Score" ], "figure_path": "./MISS-QA/figures/2_2411.16198v1_figure_2.png", "paperid": "2411.16198v1", "paper_path": "./MISS-QA/papers/2411.16198v1.json", "figure_id": "2411.16198v1_figure_2.png", "caption": "Figure 2: Framework of the proposed Visual Precision Search method for interpreting an object-level foundation model. The input is first sparsified into a set of sub-regions and then interpreted across different instances. A submodular function guides the search for significant sub-regions, updating the ordered subset iteratively, and ultimately generating the instance-level attribution map.", "qtype": "Literature_Background" }, { "question": "How does sparse sub-region segmentation impact submodular function scoring in Visual Precision Search?", "relevant_section_ids": [ "3.2", "3.3" ], "relevant_context": [ "We apply the SLICO superpixel segmentation algorithm [1] to divide the input into n sub-regions, V.", "The quality of the search space is determined by sparse division, meaning that both the method of partitioning the input and the number of sub-regions play a crucial role in the faithfulness of the Visual Precision Search." ], "final_answer": "By breaking the image into a sparse set of super-pixel sub-regions, we define the atomic elements over which our submodular scoring function (combining clue and collaboration scores) operates. 
The choice of segmentation method and the number of sub-regions directly control the search space: finer or semantically appropriate sparsification leads to more faithful and accurate region scores under the greedy submodular optimization, whereas poor or overly coarse segmentation degrades the quality and faithfulness of the resulting saliency map.", "relevant_elements": [ "Sparse Sub-region Set V", "Submodular function" ], "id": 607, "masked_question": "How does [mask1] segmentation impact submodular function scoring in Visual Precision Search?", "masked_number": 1, "masked_elements": [ "Sparse Sub-region Set V" ], "figure_path": "./MISS-QA/figures/3_2411.16198v1_figure_2.png", "paperid": "2411.16198v1", "paper_path": "./MISS-QA/papers/2411.16198v1.json", "figure_id": "2411.16198v1_figure_2.png", "caption": "Figure 2: Framework of the proposed Visual Precision Search method for interpreting an object-level foundation model. The input is first sparsified into a set of sub-regions and then interpreted across different instances. A submodular function guides the search for significant sub-regions, updating the ordered subset iteratively, and ultimately generating the instance-level attribution map.", "qtype": "Literature_Background" }, { "question": "How does conditioning on user-defined SCM impact denoising diffusion in the Semantic Conditional Module?", "relevant_section_ids": [ "4.2.1" ], "relevant_context": [ "In Semantic Conditional Module, the parameters θ_sem are composed of the object’s contact map parameters. We use a conditional generation model to infer probable contact maps ε_{θ_sem} based on user-specified or algorithmically predicted Semantic Contact Maps." ], "final_answer": "By feeding the user-defined Semantic Contact Map (SCM) into the diffusion model as a conditioning signal, each denoising step in the Semantic Conditional Module is guided to reconstruct contact-map samples that adhere to the user’s specified finger–object contact patterns. In other words, the SCM is concatenated as a condition at every noise level, steering the diffusion-based generator to output contact maps consistent with the fine-grained, user-defined semantics and thereby enabling controllable contact-map inference.", "relevant_elements": [ "Semantic Conditional Module", "SCM" ], "id": 612, "masked_question": "How does conditioning on user-defined [mask1] impact denoising diffusion in the [mask2]?", "masked_number": 2, "masked_elements": [ "SCM", "Semantic Conditional Module" ], "figure_path": "./MISS-QA/figures/0_2407.19370v1_figure_2.png", "paperid": "2407.19370v1", "paper_path": "./MISS-QA/papers/2407.19370v1.json", "figure_id": "2407.19370v1_figure_2.png", "caption": "Figure 2. Overview of ClickDiff: The model initially takes an object’s point cloud as input and predicts the contact map conditioned on the Semantic Contact Map within the Semantic Conditional Module.
Subsequently, the predicted contact map is fed into the Contact Conditional Module, where grasping is generated under the guidance of TGC and contact map.", "qtype": "Experimental_Results" }, { "question": "How does enforcing Tactile-Guided Constraint within the Contact Conditional Module refine grasp alignment?", "relevant_section_ids": [ "4.3", "5.3.3" ], "relevant_context": [ "The Tactile-Guided Constraint loss (L_TGC) specifically targets the vertices within the finger sets proximal to the object's surface, ensuring that fingers accurately align with the designated ground-truth contact areas by accurately indexing the point pairs in the SCM and calculating the distance between the centroid of each finger’s predefined set of points and the contact point on the object.", "Applying the Tactile-Guided Constraint effectively ensures that the fingers align with the designated ground-truth contact regions. Notably, the introduction of L_TGC results in a significant reduction in joint displacement and improvements in contact metrics, exemplified by a 6.11 mm decrease in Contact Deviation (CDev). Experiments demonstrate that our TGC constrains the contact position of fingers in the Contact Conditional Module, which solves the contact ambiguity problem well." ], "final_answer": "By adding the Tactile-Guided Constraint during Contact Conditional Module training, the model explicitly pulls finger vertices near the object’s surface toward the SCM-specified contact points. This is done by computing L2 distances between finger-centroids (from pre-weighted finger point sets) and their corresponding object contact points, which 1) resolves the ambiguity of ‘‘which part of the hand’’ should touch, 2) forces the fingertips to align with the true contact regions, and 3) yields a measurable drop in contact deviation (over 6 mm) and joint displacement.", "relevant_elements": [ "Contact Conditional Module", "Tactile-Guided Constraint" ], "id": 613, "masked_question": "How does enforcing [mask1] within the [mask2] refine grasp alignment?", "masked_number": 2, "masked_elements": [ "Tactile-Guided Constraint", "Contact Conditional Module" ], "figure_path": "./MISS-QA/figures/1_2407.19370v1_figure_2.png", "paperid": "2407.19370v1", "paper_path": "./MISS-QA/papers/2407.19370v1.json", "figure_id": "2407.19370v1_figure_2.png", "caption": "Figure 2. Overview of ClickDiff: The model initially takes an object’s point cloud as input and predicts the contact map conditioned on the Semantic Contact Map within the Semantic Conditional Module. Subsequently, the predicted contact map is fed into the Contact Conditional Module, where grasping is generated under the guidance of TGC and contact map.", "qtype": "Experimental_Results" }, { "question": "What potential limitations arise when using user-specified Semantic Contact Map for diverse object geometries?", "relevant_section_ids": [], "relevant_context": [], "final_answer": "Unanswerable", "relevant_elements": [ "Semantic Contact Map" ], "id": 614, "masked_question": "What potential limitations arise when using user-specified [mask1] for diverse object geometries?", "masked_number": 1, "masked_elements": [ "Semantic Contact Map" ], "figure_path": "./MISS-QA/figures/2_2407.19370v1_figure_2.png", "paperid": "2407.19370v1", "paper_path": "./MISS-QA/papers/2407.19370v1.json", "figure_id": "2407.19370v1_figure_2.png", "caption": "Figure 2.
Overview of ClickDiff: The model initially takes an object’s point cloud as input and predicts the contact map conditioned on the Semantic Contact Map within the Semantic Conditional Module. Subsequently, the predicted contact map is fed into the Contact Conditional Module, where grasping is generated under the guidance of TGC and contact map.", "qtype": "Others" }, { "question": "What limitations arise from Hop Fuse’s reliance on content-aware dynamic sampling under sudden scene changes?", "relevant_section_ids": [], "relevant_context": [], "final_answer": "Unanswerable", "relevant_elements": [ "Hop Fuse", "dynamic sampling" ], "id": 616, "masked_question": "What limitations arise from [mask1]’s reliance on content-aware [mask2] under sudden scene changes?", "masked_number": 2, "masked_elements": [ "Hop Fuse", "dynamic sampling" ], "figure_path": "./MISS-QA/figures/0_2411.00608v1_figure_2.png", "paperid": "2411.00608v1", "paper_path": "./MISS-QA/papers/2411.00608v1.json", "figure_id": "2411.00608v1_figure_2.png", "caption": "Figure 2: System overview of HopTrack. Hop Fuse associates active tracks with detections from dynamically sampled frames. Hop Update updates tracks’ positions and suppresses inaccurate tracks.", "qtype": "Others" }, { "question": "How might discretized dynamic matching in Hop Update struggle with varying object textures or illumination shifts?", "relevant_section_ids": [], "relevant_context": [], "final_answer": "Unanswerable", "relevant_elements": [ "Hop Update", "dynamic matching" ], "id": 617, "masked_question": "How might discretized [mask1] in Hop Update struggle with varying object textures or illumination shifts?", "masked_number": 1, "masked_elements": [ "dynamic matching" ], "figure_path": "./MISS-QA/figures/1_2411.00608v1_figure_2.png", "paperid": "2411.00608v1", "paper_path": "./MISS-QA/papers/2411.00608v1.json", "figure_id": "2411.00608v1_figure_2.png", "caption": "Figure 2: System overview of HopTrack. Hop Fuse associates active tracks with detections from dynamically sampled frames. Hop Update updates tracks’ positions and suppresses inaccurate tracks.", "qtype": "Others" }, { "question": "What drives using both static matching and dynamic matching for identity association?", "relevant_section_ids": [ "3.4", "3.5" ], "relevant_context": [ "Section 3.4: \"The discretized static and dynamic matching is meant to use appearance features that can be extracted efficiently with the CPU, in order to associate objects with large inter-frame displacement across multiple frames and to suppress inaccurate tracks.\"", "Section 3.5: \"The issue with static matching is that during the Hop Update phase, depending on the accuracy of the Kalman filter, the tracked objects might not be in the center of the bounding box or the bounding box might not be tight.
Therefore, we propose a lightweight, dynamic discretized matching method to be run on each hopping frame, to check if the bounding boxes are accurately tracking the objects, and suppress tracks when occlusion happens.\"" ], "final_answer": "Static matching is used at detection frames to efficiently associate objects that have large inter-frame displacements using CPU-friendly appearance cues, while dynamic matching is run on each hopping (non-detection) frame to correct for bounding-box misalignments and occlusions when the Kalman filter’s predictions become inaccurate.", "relevant_elements": [ "Static matching", "Dynamic matching", "Association" ], "id": 618, "masked_question": "What drives using both [mask1] and [mask2] for identity association?", "masked_number": 2, "masked_elements": [ "Static matching", "Dynamic matching" ], "figure_path": "./MISS-QA/figures/2_2411.00608v1_figure_2.png", "paperid": "2411.00608v1", "paper_path": "./MISS-QA/papers/2411.00608v1.json", "figure_id": "2411.00608v1_figure_2.png", "caption": "Figure 2: System overview of HopTrack. Hop Fuse associates active tracks with detections from dynamically sampled frames. Hop Update updates tracks’ positions and suppresses inaccurate tracks.", "qtype": "Design_Rationale" }, { "question": "What motivates introducing community-level hetero-meta-path alongside node-level hetero-meta-path for dual-modal integration?", "relevant_section_ids": [ "4.1.2" ], "relevant_context": [ "In constructing node-level hetero-meta-path, we measure the similarity of connection patterns of cross-modal node pairs as the strength of their structure-function coupling.", "As for community-level hetero-meta-path, we suggest that brain regions with cooperative interactions may form a closed induced subgraph in both Gf and Gd." ], "final_answer": "While node-level hetero-meta-paths capture pairwise structure–function coupling between individual regions, community-level hetero-meta-paths are introduced to model higher-order, cooperative interactions among sets of brain regions that form closed subgraphs in both functional and structural networks.", "relevant_elements": [ "node-level hetero-meta-path", "community-level hetero-meta-path" ], "id": 620, "masked_question": "What motivates introducing [mask1] alongside node-level hetero-meta-path for dual-modal integration?", "masked_number": 1, "masked_elements": [ "community-level hetero-meta-path" ], "figure_path": "./MISS-QA/figures/0_2411.08424v1_figure_1.png", "paperid": "2411.08424v1", "paper_path": "./MISS-QA/papers/2411.08424v1.json", "figure_id": "2411.08424v1_figure_1.png", "caption": "Figure 1: Overview of our proposed method. a) We extract node features Φ1 and Φ2 from each modality to establish Gf = {𝒩f, Φ1}, Gd = {𝒩d, Φ2}.
b) Node-level and community-level hetero-meta-paths are combined as meta-path Φ3: 𝒩f → 𝒩d, and Φ4 is a reversal of Φ3. The subject-level HG is denoted as GH = {(𝒩f, 𝒩d), (Φ1, Φ2, Φ3, Φ4)}. c) We preserve Φ2 and dynamically reconstruct FC to obtain Φ̂1, then update Φ3 and Φ4 to generate augmented ĜH. d) Both GH and ĜH are fed into a backbone consisting of HAN, HG pooling and readout layers to extract dual-modal features.", "qtype": "Design_Rationale" }, { "question": "How does structural constraint preserve Φ2 during augmented GH generation?", "relevant_section_ids": [ "4", "4.2" ], "relevant_context": [ "In present work, we propose a novel HGNN to fuse dual-modal information. We define meta-paths in the fused HG as Φ1, Φ2, Φ3, and Φ4, where homo-meta-paths Φ1, Φ2 are edges of FC or SC, and hetero-meta-paths Φ3, Φ4 are edges between FC and SC.", "The abundant heterogeneity of the HG provides ample possibilities from the perspective of construction, which provides convenience for augmentation. Therefore, we propose to dynamically reconstruct FC to obtain Φ̂1; then Φ3 and Φ4 will naturally update along with Φ̂1, while Φ2 is fixed as a structural constraint to maintain the semantic consistency of HGs before and after augmentation.", "Finally, we consider edges in the reconstructed FC as corresponding to Φ1. With Φ2 fixed, we can update Φ3 and Φ4 following (5)–(7). Then the augmented ĜH can be constructed following (8)–(9). We sent GH and ĜH in pairs into the backbone to avoid data leakage."
], "final_answer": "During augmentation only the functional‐connectivity meta-path Φ1 is re-estimated from sliding-window correlation, while the structural‐connectivity meta-path Φ2 is held fixed as a ‘‘structural constraint.’’ In other words, the adjacency matrix corresponding to Φ2 (SC) is not changed during augmentation, preserving Φ2 in the augmented heterogeneous graph.", "relevant_elements": [ "structural constraint", "Φ2", "augmented GH" ], "id": 622, "masked_question": "How does [mask1] preserve [mask2] during augmented GH generation?", "masked_number": 2, "masked_elements": [ "structural constraint", "Φ2" ], "figure_path": "./MISS-QA/figures/1_2411.08424v1_figure_1.png", "paperid": "2411.08424v1", "paper_path": "./MISS-QA/papers/2411.08424v1.json", "figure_id": "2411.08424v1_figure_1.png", "caption": "Figure 1: Overview of our proposed method. a) We extract node features, Φ1subscriptΦ1\\Phi_{1}roman_Φ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and Φ2subscriptΦ2\\Phi_{2}roman_Φ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT from each modality to establish Gf={𝒩f,Φ1}subscript𝐺𝑓subscript𝒩𝑓subscriptΦ1G_{f}=\\left\\{\\mathcal{N}_{f},\\Phi_{1}\\right\\}italic_G start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT = { caligraphic_N start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT , roman_Φ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT }, Gd={𝒩d,Φ2}subscript𝐺𝑑subscript𝒩𝑑subscriptΦ2G_{d}=\\left\\{\\mathcal{N}_{d},\\Phi_{2}\\right\\}italic_G start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT = { caligraphic_N start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT , roman_Φ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT }. b) Node-level and community-level hetero-meta-paths are combined as meta-path Φ3:𝒩f→𝒩d:subscriptΦ3→subscript𝒩𝑓subscript𝒩𝑑\\Phi_{3}:\\mathcal{N}_{f}\\rightarrow\\mathcal{N}_{d}roman_Φ start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT : caligraphic_N start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT → caligraphic_N start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT, and Φ4subscriptΦ4\\Phi_{4}roman_Φ start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT is a reversal of Φ3subscriptΦ3\\Phi_{3}roman_Φ start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT. The subject-level HG is denoted as GH={(𝒩f,𝒩d),(Φ1,Φ2,Φ3,Φ4)}subscript𝐺𝐻subscript𝒩𝑓subscript𝒩𝑑subscriptΦ1subscriptΦ2subscriptΦ3subscriptΦ4G_{H}=\\left\\{\\left(\\mathcal{N}_{f},\\mathcal{N}_{d}\\right),\\left(\\Phi_{1},\\Phi_%\n{2},\\Phi_{3},\\Phi_{4}\\right)\\right\\}italic_G start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT = { ( caligraphic_N start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT , caligraphic_N start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT ) , ( roman_Φ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , roman_Φ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , roman_Φ start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT , roman_Φ start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT ) },. c) We preserve Φ2subscriptΦ2\\Phi_{2}roman_Φ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and dynamically reconstruct FC to obtain Φ^1subscript^Φ1\\hat{\\Phi}_{1}over^ start_ARG roman_Φ end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, then update Φ3subscriptΦ3\\Phi_{3}roman_Φ start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT and Φ4subscriptΦ4\\Phi_{4}roman_Φ start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT to generate augmented G^Hsubscript^𝐺𝐻\\hat{G}_{H}over^ start_ARG italic_G end_ARG start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT. 
d) Both GH and ĜH are fed into a backbone consisting of HAN, HG pooling and readout layers to extract dual-modal features.", "qtype": "Implementation_Details" }, { "question": "How does instance normalization in latent map fusion remove style while preserving source content?", "relevant_section_ids": [ "3.2.2" ], "relevant_context": [ "In the top branch, an instance normalization (IN) layer standardizes Z_x across spatial dimensions using channel-wise mean and variance, producing Z^c_x. Previous studies show that channel-wise statistics in latent feature maps can encapsulate the style of images [40,41,42,43]. By standardizing each feature channel to zero mean and unit variance, the IN layer removes instance-specific style from an image while retaining essential content features in Z^c_x [44]." ], "final_answer": "Instance normalization (IN) computes the mean and variance of each latent feature channel across spatial locations and re-scales the channel to zero mean and unit variance. Because style information is largely encoded in these channel-wise statistics, IN effectively removes the original style, while the remaining normalized spatial patterns continue to carry the anatomical (content) information of the source MRI.", "relevant_elements": [ "Instance Normalization", "Latent Map Fusion" ], "id": 624, "masked_question": "How does [mask1] in [mask2] remove style while preserving source content?", "masked_number": 2, "masked_elements": [ "Instance Normalization", "Latent Map Fusion" ], "figure_path": "./MISS-QA/figures/0_2408.09315v1_figure_1.png", "paperid": "2408.09315v1", "paper_path": "./MISS-QA/papers/2408.09315v1.json", "figure_id": "2408.09315v1_figure_1.png", "caption": "Figure 1: Illustration of the proposed HCLD framework. During training, it extracts latent feature maps from source and target MRIs using an encoder E, fuses latent representations, and trains a conditional latent diffusion model (cLDM) to estimate the translated latent maps. During inference, it applies the trained cLDM to generate the final translated latent map by iterative denoising Ts steps and then utilizes a decoder D to reconstruct the translated MRI. Both E and D are derived from an autoencoder pre-trained on 3,500 T1-weighted brain MRIs.", "qtype": "Implementation_Details" }, { "question": "How are timestep encodings integrated into ResBlock and AttnBlock within the cLDM?", "relevant_section_ids": [], "relevant_context": [], "final_answer": "Unanswerable", "relevant_elements": [ "ResBlock", "AttnBlock", "cLDM" ], "id": 625, "masked_question": "How are timestep encodings integrated into [mask1] and AttnBlock within the cLDM?", "masked_number": 1, "masked_elements": [ "ResBlock" ], "figure_path": "./MISS-QA/figures/1_2408.09315v1_figure_1.png", "paperid": "2408.09315v1", "paper_path": "./MISS-QA/papers/2408.09315v1.json", "figure_id": "2408.09315v1_figure_1.png", "caption": "Figure 1: Illustration of the proposed HCLD framework. During training, it extracts latent feature maps from source and target MRIs using an encoder E, fuses latent representations, and trains a conditional latent diffusion model (cLDM) to estimate the translated latent maps. During inference, it applies the trained cLDM to generate the final translated latent map by iterative denoising Ts steps and then utilizes a decoder D to reconstruct the translated MRI. Both E and D are derived from an autoencoder pre-trained on 3,500 T1-weighted brain MRIs.", "qtype": "Implementation_Details" }, { "question": "How does latent map fusion's IN branch repurpose instance normalization to differentiate content from style?", "relevant_section_ids": [ "3.2.2" ], "relevant_context": [ "In the top branch, an instance normalization (IN) layer standardizes z_x across spatial dimensions using channel-wise mean and variance, producing z^c_x.", "Previous studies show that channel-wise statistics in latent feature maps can encapsulate the style of images [40, 41, 42, 43]. By standardizing each feature channel to zero mean and unit variance, the IN layer removes instance-specific style from an image while retaining essential content features in z^c_x." ], "final_answer": "The IN branch applies instance normalization to the source latent map by normalizing each channel to zero mean and unit variance. Since channel-wise mean and variance encode style, this procedure strips away instance-specific style information, yielding a latent map that retains content features while discarding style.", "relevant_elements": [ "Latent Map Fusion", "IN" ], "id": 626, "masked_question": "How does [mask1]'s [mask2] branch repurpose instance normalization to differentiate content from style?", "masked_number": 2, "masked_elements": [ "Latent Map Fusion", "IN" ], "figure_path": "./MISS-QA/figures/2_2408.09315v1_figure_1.png", "paperid": "2408.09315v1", "paper_path": "./MISS-QA/papers/2408.09315v1.json", "figure_id": "2408.09315v1_figure_1.png", "caption": "Figure 1: Illustration of the proposed HCLD framework. During training, it extracts latent feature maps from source and target MRIs using an encoder E, fuses latent representations, and trains a conditional latent diffusion model (cLDM) to estimate the translated latent maps. During inference, it applies the trained cLDM to generate the final translated latent map by iterative denoising Ts steps and then utilizes a decoder D to reconstruct the translated MRI. Both E and D are derived from an autoencoder pre-trained on 3,500 T1-weighted brain MRIs.", "qtype": "Literature_Background" }, { "question": "How does pre-trained encoder reuse enable cLDM's efficient two-stage latent diffusion optimization?", "relevant_section_ids": [ "3.2", "3.4" ], "relevant_context": [ "Notably, only the cLDM undergoes updates during the training stage.", "In the second training stage, the pre-trained autoencoder networks E and D are reused with their network parameters frozen. Only the cLDM is updated to reconstruct the translated source latent map with the target domain style, which is computationally efficient as it operates in low-dimensional latent space." ], "final_answer": "By first training a 3D autoencoder (encoder E and decoder D) to compress MRIs into a low-dimensional latent space, then freezing its weights in the second stage, the framework only needs to update the conditional latent diffusion model (cLDM).
This two-stage training—autoencoder pre-training followed by cLDM optimization in latent space—dramatically reduces computational cost (because diffusion operates in low-dimensional space), avoids the instability of jointly updating multiple networks, and improves generalizability on unseen data.", "relevant_elements": [ "Encoder E", "Conditional Latent Diffusion Model (cLDM)" ], "id": 627, "masked_question": "How does pre-trained [mask1] reuse enable cLDM's efficient two-stage latent diffusion optimization?", "masked_number": 1, "masked_elements": [ "Encoder E" ], "figure_path": "./MISS-QA/figures/3_2408.09315v1_figure_1.png", "paperid": "2408.09315v1", "paper_path": "./MISS-QA/papers/2408.09315v1.json", "figure_id": "2408.09315v1_figure_1.png", "caption": "Figure 1: Illustration of the proposed HCLD framework. During training, it extracts latent feature maps from source and target MRIs using an encoder E, fuses latent representations, and trains a conditional latent diffusion model (cLDM) to estimate the translated latent maps. During inference, it applies the trained cLDM to generate the final translated latent map by iterative denoising Ts steps and then utilizes a decoder D to reconstruct the translated MRI. Both E and D are derived from an autoencoder pre-trained on 3,500 T1-weighted brain MRIs.", "qtype": "Literature_Background" }, { "question": "How does feature extraction inform multi-relational text graph construction differently than single-view construction?", "relevant_section_ids": [ "1", "3.1", "3.2" ], "relevant_context": [ "Existing methods treat words and documents as nodes and construct a heterogeneous text graph based on the point-wise mutual information (PMI) relationships between words and the TF-IDF relationships between words and documents. Despite such methods having achieved promising results, they neglect the rich and deep semantics, which is pivotal for capturing the core intent of the text. (Section 1)", "To forge links between texts that are otherwise unconnected, we extract various core features: titles, keywords, and events. Each of these is embedded via a pre-trained encoder to yield vector representations that will later define semantic relations. (Section 3.1)", "Rather than relying on a single, undifferentiated graph, we calculate the semantic similarity between the extracted features to construct multiple semantic relationships between document nodes, corresponding to title relationships, keyword relationships, and event relationships. Based on the rich features inherent in the text, the constructed text graph can maximize the connections between similar documents. (Section 3.2)" ], "final_answer": "Traditional single-view graph construction builds one graph—typically using PMI for word–word edges and TF-IDF for word–document edges—thus ignoring deeper semantics. In contrast, ConNHS’s feature extraction first pulls out titles, keywords, and events and embeds each via a pre-trained encoder. Then, in multi-relational graph construction, these distinct features are used to compute separate similarity scores, producing three parallel subgraphs (title-based, keyword-based, event-based).
This multi-view approach captures richer semantic connections than a single undifferentiated graph.", "relevant_elements": [ "Feature extraction", "Multi-relational text graph construction" ], "id": 628, "masked_question": "How does [mask1] inform [mask2] differently than single-view construction?", "masked_number": 2, "masked_elements": [ "Feature extraction", "Multi-relational text graph construction" ], "figure_path": "./MISS-QA/figures/0_2411.16787v1_figure_1.png", "paperid": "2411.16787v1", "paper_path": "./MISS-QA/papers/2411.16787v1.json", "figure_id": "2411.16787v1_figure_1.png", "caption": "Figure 1: Flow chart of the proposed ConNHS. Initially, we construct a multi-relational text graph by leveraging inherent core features (titles, keywords, events) to establish semantic connections among texts while encoding textual content as initial node representations. Subsequently, relational separation yields distinct subgraphs, upon which intra-graph and inter-graph propagation are performed to obtain contrastive samples and similarity score matrix. During Contrastive learning with NHS, negative selection is optimized to encourage more explicit cluster boundaries (minimizing intra-class distances while maximizing inter-class distances; distinct colors indicate different clusters). Ultimately, predicted labels are assigned to document nodes via a logical classifier.", "qtype": "Literature_Background" }, { "question": "How does inter-graph propagation improve upon equal-weight fusion in earlier multi-graph frameworks?", "relevant_section_ids": [ "1", "3.3" ], "relevant_context": [ "Secondly, they assign equal weights to different features during the inter-graph propagation, ignoring the intrinsic differences inherent in these features.", "After intra-graph propagation, each document node learns unique feature information under different semantic relationships. Therefore, we design a cross-graph attention network to coordinate and integrate diverse feature information." ], "final_answer": "Inter-graph propagation improves upon equal-weight fusion by introducing a cross-graph attention network (CGAN) that learns attention weights for each semantic subgraph’s node representations, rather than averaging them equally. This attention mechanism harmonizes and coordinates diverse feature information across graphs, capturing their intrinsic differences and leading to more nuanced fused representations.", "relevant_elements": [ "Inter-Graph propagation" ], "id": 629, "masked_question": "How does [mask1] improve upon equal-weight fusion in earlier multi-graph frameworks?", "masked_number": 1, "masked_elements": [ "Inter-Graph propagation" ], "figure_path": "./MISS-QA/figures/1_2411.16787v1_figure_1.png", "paperid": "2411.16787v1", "paper_path": "./MISS-QA/papers/2411.16787v1.json", "figure_id": "2411.16787v1_figure_1.png", "caption": "Figure 1: Flow chart of the proposed ConNHS. Initially, we construct a multi-relational text graph by leveraging inherent core features (titles, keywords, events) to establish semantic connections among texts while encoding textual content as initial node representations. Subsequently, relational separation yields distinct subgraphs, upon which intra-graph and inter-graph propagation are performed to obtain contrastive samples and similarity score matrix. 
During Contrastive learning with NHS, negative selection is optimized to encourage more explicit cluster boundaries (minimizing intra-class distances while maximizing inter-class distances; distinct colors indicate different clusters). Ultimately, predicted labels are assigned to document nodes via a logical classifier.", "qtype": "Literature_Background" }, { "question": "How does regressing post-D rewards on binary features quantify feature imprint methodology?", "relevant_section_ids": [ "2.1" ], "relevant_context": [ "We can now quantify the extent to which target and spoiler features imprint on the RMs by regressing rewards (or reward shifts) against the boolean feature indicators:", "… The coefficient β_j estimates the point increase in reward between an entry t_i (or t_i′) containing feature j compared to an entry without it, holding all other features constant. We refer to this as the post-D imprint for value j." ], "final_answer": "By performing a linear regression of the post-D reward scores on binary feature indicators, the method assigns each feature j a coefficient β_j. This coefficient directly measures the point increase in the reward model’s score when that feature is present (versus absent), thereby quantifying the strength of the feature’s imprint on the trained reward model.", "relevant_elements": [ "post-D reward vectors", "feature imprint" ], "id": 632, "masked_question": "How does regressing [mask1] on binary features quantify feature imprint methodology?", "masked_number": 1, "masked_elements": [ "post-D reward vectors" ], "figure_path": "./MISS-QA/figures/0_2408.10270v1_figure_1.png", "paperid": "2408.10270v1", "paper_path": "./MISS-QA/papers/2408.10270v1.json", "figure_id": "2408.10270v1_figure_1.png", "caption": "Figure 1: Summary of the paper’s background, setup and contributions. [1] AI Alignment Pipeline: This section illustrates the sequence of events during RLHF, highlighting the interactions between the alignment dataset, human preferences, the RM and the base-model being aligned. [2] Alignment Dataset Taxonomization: The alignment dataset 𝒟 comprises pairs of text (t_i^c, t_i^r) where t_i^c is preferred by the human over t_i^r, presumably because it is more aligned with a set of defined target values. (Top) The alignment dataset is featurized using an LM-labeler based on a set of target features (intended for alignment, in black) and spoiler features (learned inadvertently, in grey). (Bottom) The alignment dataset is rewritten and re-featurized accordingly. [3] Reward Models (RMs): (Top) An RM maps a user input-model output pair t to a score r(t). We compare the RM before (pre-𝒟 model ℛ̲) and after (post-𝒟 model ℛ) it is trained on the alignment dataset.
(Bottom) The pair of rewards awarded by ℛ, (r(t_i^c), r(t_i^r)), is interpreted as vectors. The sign of r(t_i^c) − r(t_i^r) indicates whether the RM’s scores are aligned or not with human preferences in the dataset. (r̲(t_i^c), r̲(t_i^r)) denotes the reward vectors assigned by ℛ̲. [4] Evaluation Report for Anthropic/hh Alignment Dataset x OpenAssistant RM Alignment Pipeline: Results of the SEAL methodology applied to an open-source alignment pipeline purposed to render base models more helpful and harmless. (Feature Imprint) By regressing rewards against binary feature indicators, we estimate that top features driving rewards are harmlessness, privacy-preserving, helpfulness, eloquence and sentiment. A feature imprint of β(harmlessness) = 2.09 implies that harmless text has a reward 2.09 points higher than harmful text.
(Alignment Resistance) More than one out of four pairs in the alignment dataset have r̲(t_i^c) > r̲(t_i^r) and r(t_i^c)