Running On-Device vs. Cloud LLM Inference: Evaluate language model performance in-browser vs. in the cloud