Instructions to use OEvortex/Wise-EMO-2B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OEvortex/Wise-EMO-2B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OEvortex/Wise-EMO-2B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OEvortex/Wise-EMO-2B")
model = AutoModelForCausalLM.from_pretrained("OEvortex/Wise-EMO-2B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OEvortex/Wise-EMO-2B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OEvortex/Wise-EMO-2B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OEvortex/Wise-EMO-2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/OEvortex/Wise-EMO-2B
```
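The OpenAI-compatible endpoint exposed by vLLM can also be called from Python instead of curl. A minimal sketch using only the standard library, assuming the server from the commands above is running on localhost:8000; the `build_chat_request` helper is illustrative, not part of vLLM:

```python
import json

def build_chat_request(model: str, user_content: str) -> str:
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_content},
        ],
    }
    return json.dumps(payload)

body = build_chat_request("OEvortex/Wise-EMO-2B", "What is the capital of France?")
print(body)

# To actually send the request (requires the vLLM server to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request body works against the SGLang server below; only the port changes.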
- SGLang
How to use OEvortex/Wise-EMO-2B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OEvortex/Wise-EMO-2B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OEvortex/Wise-EMO-2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OEvortex/Wise-EMO-2B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OEvortex/Wise-EMO-2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use OEvortex/Wise-EMO-2B with Docker Model Runner:
```shell
docker model run hf.co/OEvortex/Wise-EMO-2B
```
Wise-EMO-2B
Overview
Wise-EMO-2B is a 2.5-billion-parameter conversational AI model that combines the emotional intelligence capabilities of the EMO-2B model with insights from ancient Indian wisdom traditions. By integrating knowledge from philosophies such as Yoga, Vedanta, Buddhism, Jainism, Sikhism, Hinduism, and other indigenous spiritual practices, the model aims to provide emotionally resonant dialogue infused with profound philosophical perspectives.
Key Features
- Emotional Intelligence: Inherited from EMO-2B, Wise-EMO-2B excels at perceiving and responding to emotional undertones with empathy, providing emotionally supportive responses.
- Philosophical Wisdom: The model has been further finetuned on texts from ancient Indian wisdom traditions, allowing it to draw upon profound philosophical concepts when appropriate.
- Holistic Perspective: Wise-EMO-2B offers a holistic viewpoint that combines emotional resonance with spiritual and philosophical wisdom for a well-rounded, insightful conversational experience.
- Dynamic Contextualization: The model adapts its communication style, emotional responses, and philosophical framing based on the specific conversational context.
Use Cases
Wise-EMO-2B can be beneficial for applications that require emotionally intelligent yet philosophically grounded dialogue, such as:
- Emotional support companions with a wisdom-oriented perspective
- Philosophical discussion and ideation
- Storytelling and creative writing with profound themes
- Personal growth, self-reflection, and mindfulness practice
- Exploring ancient wisdom in relation to modern issues
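For use cases like the emotional-support companion above, conversations use the same chat-message format shown in the Transformers section. A minimal sketch of composing such a conversation; the system prompt wording here is illustrative and not shipped with the model, and if the model's chat template does not accept a system role, prepend the instruction to the user turn instead:

```python
def build_support_conversation(user_message: str) -> list[dict]:
    """Compose a chat-format conversation for an emotionally supportive,
    wisdom-oriented exchange."""
    # Illustrative system prompt; adjust to your application.
    system_prompt = (
        "You are a compassionate companion. Respond with empathy, and where "
        "appropriate, draw on perspectives from ancient Indian wisdom traditions."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_support_conversation("I've been feeling overwhelmed at work lately.")
# These messages can be passed directly to the Transformers pipeline:
# pipe = pipeline("text-generation", model="OEvortex/Wise-EMO-2B")
# pipe(messages)
```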
Limitations and Responsible Use
While powerful, Wise-EMO-2B is an AI model with inherent limitations. It should be treated as a supportive tool, not a substitute for professional mental health services or spiritual guidance. The model may also reflect biases present in its training data, so users should evaluate its outputs critically and report concerning responses.
As with all AI systems, Wise-EMO-2B should be used responsibly and ethically, particularly given the sensitivity around philosophy and spiritual beliefs. Outputs should be carefully evaluated in context.
This is the first release (mark one) of Wise-EMO, intended for testing purposes only.