Stream2LLM Dataset
Dataset for the paper: Stream2LLM: Overlap Context Streaming and Prefill for Reduced Time-to-First-Token (MLSys 2026 artifact evaluation). Contains workload traces, experiment run logs, and performance model measurements used to produce all figures, tables, and inline numbers in the paper.
This repository is a git submodule of the main Stream2LLM artifact (branch: `mlsys_artifact`). Clone the parent repo with `--recurse-submodules` to automatically fetch this data.
Directory Structure
```
data/
├── anns/                               # ANNS workload data
│   ├── res/                            # 4,997 pipeline trace CSVs
│   ├── retrieved_corpus_content.*.json # Corpus content shards
│   ├── query_trace_map_5k.json         # Query-to-trace mapping
│   └── compute_workload_stats.py       # Workload statistics script
├── crawl/                              # Crawler workload data
│   ├── traces/simpleQA_ALL/            # 4,322 query trace CSVs
│   └── compute_workload_stats.py       # Workload statistics script
├── perf_model/                         # Performance model measurements
│   ├── recomputation/                  # 7 recomputation latency JSONs
│   └── swap/                           # 11 swap latency JSONs
└── run_log/                            # Experiment run logs
    ├── crawler/                        # 5 crawler experiment configurations
    └── anns/                           # 5 ANNS experiment configurations
```
Workload Traces
ANNS (`anns/res/`)
4,997 pipeline trace files from approximate nearest neighbor search workloads. Each file is named `_L10000_W8_query<ID>_pipeline_trace.csv` and contains:
| Column | Description |
|---|---|
| `StartTime_us` | Chunk start time in microseconds |
| `EndTime_us` | Chunk end time in microseconds |
| `StartIteration` | Starting iteration index |
| `EndIteration` | Ending iteration index |
| `PipelinePool` | Tuple of candidate IDs in the pipeline pool |
Each row represents a chunk arrival: a batch of ANNS iterations that produces new candidate results. Queries have 1–26 chunks (median 4), with inter-chunk arrival times ranging from sub-millisecond to ~9 seconds (median 37 ms).
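As a minimal sketch of how a pipeline trace can be consumed, the snippet below parses the columns listed above and derives inter-chunk arrival gaps. The two-row trace is synthetic stand-in data, not taken from the dataset; with real files you would read from `anns/res/` instead of an in-memory string.

```python
import csv
import io

# Synthetic two-chunk trace mirroring the documented schema; real files
# are CSVs such as anns/res/_L10000_W8_query<ID>_pipeline_trace.csv.
trace_csv = """StartTime_us,EndTime_us,StartIteration,EndIteration,PipelinePool
0,1200,0,7,"(3, 17, 42)"
38000,40100,8,15,"(3, 17, 42, 99)"
"""

rows = list(csv.DictReader(io.StringIO(trace_csv)))

# Each row is one chunk arrival; the inter-chunk gap is the difference
# between successive chunk start times, converted from us to ms.
starts_us = [int(r["StartTime_us"]) for r in rows]
gaps_ms = [(b - a) / 1000.0 for a, b in zip(starts_us, starts_us[1:])]
print(gaps_ms)  # [38.0]
```

The same per-query gap computation underlies the summary statistics quoted above (median 37 ms between chunks).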
Crawler (`crawl/traces/simpleQA_ALL/`)
4,322 query trace files from a web crawling workload (SimpleQA question-answering). Each file is named `query_<ID>.csv` and contains:
| Column | Description |
|---|---|
| `type` | Event type: `tavily_search` or `page_scrape` |
| `startTime` | Event start time in seconds |
| `endTime` | Event end time in seconds |
| `query` | The original query string (on search rows) |
| `links_found` | Number of links returned by the search |
| `url` | URL scraped (on `page_scrape` rows) |
| `content_length` | Length of scraped content in characters |
| `content` | Scraped page text |
| `link_idx` | Index of the link being scraped |
Each row is a chunk event. The first row is typically a `tavily_search`, followed by `page_scrape` events. Queries have 1–17 chunks (median 8), with inter-chunk arrival times ranging from 3 ms to ~35 seconds (median 701 ms).
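A crawler trace can be summarized the same way; the sketch below uses a synthetic two-event trace (not real dataset content) following the schema above, counting scrape events and computing inter-chunk gaps in milliseconds.

```python
import csv
import io

# Synthetic trace mirroring crawl/traces/simpleQA_ALL/query_<ID>.csv:
# a search event followed by one page scrape. Empty fields are columns
# that do not apply to the given row type.
trace_csv = """type,startTime,endTime,query,links_found,url,content_length,content,link_idx
tavily_search,0.00,0.85,who wrote the paper,5,,,,
page_scrape,0.90,2.10,,,https://example.com/a,1843,some page text,0
"""

events = list(csv.DictReader(io.StringIO(trace_csv)))

# Inter-chunk gaps between successive event start times, in ms.
starts = [float(e["startTime"]) for e in events]
gaps_ms = [(b - a) * 1000.0 for a, b in zip(starts, starts[1:])]
n_scrapes = sum(e["type"] == "page_scrape" for e in events)
print(events[0]["type"], n_scrapes, gaps_ms)
```

Note that search rows leave the scrape-specific columns (`url`, `content_length`, `content`, `link_idx`) empty, and vice versa.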
Performance Model (`perf_model/`)
Microbenchmark measurements for KV cache eviction cost modeling, collected across multiple GPU types (A40, A100, H100, H200) and model configurations (8B, 70B with varying tensor parallelism).
- `recomputation/`: Recomputation latency in ms, keyed by the number of tokens recomputed (e.g., 16, 32, ..., 8192). Each key maps to an array of repeated measurements.
- `swap/`: Swap (CPU→GPU transfer) latency in ms, with the same key structure. Files ending in `_kernel_latency.json` contain kernel-only timings; the others include end-to-end transfer overhead.
Hardware configurations: `A40`, `A100`, `H100_tp2`, `H100_tp4_70B`, `H200_tp2`, `H200_tp4_70B`.
Run Logs (`run_log/`)
Experiment outputs from the Stream2LLM serving system, organized by workload and configuration.
Structure
```
run_log/<workload>/<config>/<scheduler>/<timestamp>/
├── config_<timestamp>.yaml   # Experiment configuration
└── run_metrics.csv           # Per-event metrics log
```
Configurations
| Workload | Config Directory | Description |
|---|---|---|
| Crawler | `H200_enhanced_schedulers_v1_full` | Standard H200 runs |
| Crawler | `H200_enhanced_schedulers_v1_full_delay_10` | With 10 ms artificial delay (memory pressure) |
| Crawler | `H200_..._delay_10_recomp_only` | Delay + recomputation-only eviction |
| Crawler | `H200_..._delay_10_swap_only` | Delay + swap-only eviction |
| Crawler | `H100_enhanced_schedulers_v1_full` | Standard H100 runs |
| ANNS | `H200_enhanced_schedulers_v1_full` | Standard H200 runs |
| ANNS | `H200_enhanced_schedulers_v1_500q_delay_30` | With 30 ms artificial delay (memory pressure) |
| ANNS | `H200_..._delay_30_recomp_only` | Delay + recomputation-only eviction |
| ANNS | `H200_..._delay_30_swap_only` | Delay + swap-only eviction |
| ANNS | `H100_enhanced_schedulers_v1_full` | Standard H100 runs |
Schedulers
Each configuration contains results for four scheduling policies:
- `default_vllm`: Default vLLM scheduler (baseline)
- `fcfs`: First-come-first-served
- `lcas`: Last-chunk-arrival-stamp scheduler
- `mcps`: Most-chunks-processed scheduler
Run Metrics CSV
The `run_metrics.csv` file logs timestamped events for each experiment run:
| Column | Description |
|---|---|
| `event_timestamp` | Unix timestamp of the event |
| `event_type` | Event type (e.g., `replay_start`, `query_delay`, `chunk_sent`, `response_received`) |
| `query_id` | Query identifier |
| `request_id` | vLLM request identifier |
| `stream` | Whether streaming input was enabled |
| `concurrency` | Whether concurrent requests were enabled |
| `duration_secs` | Duration of the event in seconds |
| `details` | Additional event-specific information |
| `request_size` | Size of the request in tokens |
| `concurrent_requests` | Number of concurrent requests at event time |
| `replay_rate` | Poisson arrival rate used |
| `prev_event_type` | Previous event type for this query |
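As a minimal sketch of post-processing a `run_metrics.csv`, the snippet below aggregates total duration per event type. The three-row log is synthetic, using the documented columns; a real analysis would read files under `run_log/<workload>/<config>/<scheduler>/<timestamp>/`.

```python
import csv
import io
from collections import defaultdict

# Synthetic three-event log for one query, mirroring the documented
# run_metrics.csv columns; values are illustrative only.
log_csv = """event_timestamp,event_type,query_id,request_id,stream,concurrency,duration_secs,details,request_size,concurrent_requests,replay_rate,prev_event_type
1700000000.0,chunk_sent,q1,r1,True,True,0.01,,512,3,2.0,replay_start
1700000000.5,chunk_sent,q1,r1,True,True,0.02,,256,3,2.0,chunk_sent
1700000001.0,response_received,q1,r1,True,True,1.50,,768,2,2.0,chunk_sent
"""

# Sum duration_secs per event type across the log.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(log_csv)):
    totals[row["event_type"]] += float(row["duration_secs"])
print(dict(totals))
```

Grouping instead by `query_id` and taking the timestamp of the final `response_received` event is the natural extension for per-query end-to-end latency.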
Corpus Content (`anns/`)
The ANNS corpus is stored as sharded JSON files (`retrieved_corpus_content.*.json` and `retrieved_corpus_content.part.*.json`). Each file maps document IDs to their text content, used for constructing input sequences from ANNS retrieval results.
The `query_trace_map_5k.json` file maps query IDs to their corresponding pipeline trace filenames and query text.
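The sketch below shows one way these pieces fit together; both JSON payloads are synthetic stand-ins, and the field names `trace_file` and `query_text` are hypothetical placeholders for whatever keys the map actually uses.

```python
import json

# Hypothetical entry shaped like query_trace_map_5k.json: query ID ->
# pipeline trace filename and query text (field names are assumptions).
trace_map = json.loads(
    '{"q42": {"trace_file": "_L10000_W8_query42_pipeline_trace.csv",'
    ' "query_text": "example question"}}'
)

# Hypothetical corpus shard: document ID -> document text, shaped like
# a retrieved_corpus_content.*.json file.
corpus_shard = json.loads('{"doc7": "document seven text"}')

# Resolve a query to its trace file, then fetch a retrieved document's
# text for input-sequence construction.
entry = trace_map["q42"]
print(entry["trace_file"], "|", corpus_shard["doc7"])
```

In the real pipeline, the candidate IDs in each trace's `PipelinePool` column are the keys looked up in the corpus shards.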