---
library_name: transformers
tags: []
---

## Model Description

LoRA adapter weights from fine-tuning [BioMobileBERT](https://huggingface.co/nlpie/bio-mobilebert) on the MIMIC-III mortality prediction task. The [PEFT](https://github.com/huggingface/peft) library was used, and the model was trained for a maximum of 5 epochs with early stopping; full details can be found in the [GitHub repo](https://github.com/nlpie-research/efficient-ml).

- **Developed by:** Niall Taylor
- **Model type:** Language model LoRA adapter
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Parent Model:** BioMobileBERT
- **Resources for more information:**
  - [GitHub Repo](https://github.com/nlpie-research/efficient-ml)
  - [Associated Paper](https://arxiv.org/abs/2402.10597)
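
For context, below is a minimal sketch of how a LoRA adapter of this kind can be attached to the base model with PEFT. The hyperparameters (`r`, `lora_alpha`, `lora_dropout`) and the `target_modules` choice are illustrative assumptions, not the values used in the associated paper; see the GitHub repo for the actual training configuration.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# load the base model with a 2-label classification head for mortality prediction
base_model = AutoModelForSequenceClassification.from_pretrained(
    "nlpie/bio-mobilebert", num_labels=2
)

# illustrative LoRA hyperparameters -- not the exact values from the paper
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # assumed attention projections
)

# wrap the base model; only the adapter (and classification head) weights train
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```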

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

model_name = "NTaylor/bio-mobilebert-mimic-mp-lora"

# load the LoRA adapter together with its base model
model = AutoPeftModelForSequenceClassification.from_pretrained(model_name)

# use the base BioMobileBERT tokenizer
tokenizer = AutoTokenizer.from_pretrained("nlpie/bio-mobilebert")

# example input
text = "Clinical note..."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# extract the prediction as the argmax of the logits
pred = torch.argmax(outputs.logits, dim=-1)
print(f"Prediction is: {pred}")  # binary classification: 1 for mortality
```
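
Continuing from the snippet above, if a probability is more useful than a hard label, the logits can be passed through a softmax. This is a small illustrative addition, assuming label index 1 is the mortality class as in the example above.

```python
# convert the logits from the previous snippet to class probabilities
probs = torch.softmax(outputs.logits, dim=-1)
print(f"P(mortality) = {probs[0, 1].item():.3f}")  # assumes index 1 = mortality
```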

</details>
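
If you want to deploy without a PEFT dependency, the adapter can also be merged into the base weights. A minimal sketch is shown below; the output path is hypothetical.

```python
from peft import AutoPeftModelForSequenceClassification

model = AutoPeftModelForSequenceClassification.from_pretrained(
    "NTaylor/bio-mobilebert-mimic-mp-lora"
)

# fold the LoRA deltas into the base weights; the result is a plain
# transformers model that no longer needs peft at inference time
merged = model.merge_and_unload()
merged.save_pretrained("bio-mobilebert-mimic-mp-merged")  # hypothetical path
```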

## Out-of-Scope Use

These LoRA adapter weights were trained on the MIMIC-III dataset and are not intended for use on other datasets, nor in any real clinical setting. The experiments were conducted to explore the potential of LoRA adapters for clinical NLP tasks, and the model should not be used for any other purpose.

# Citation

**BibTeX:**
```bibtex
@misc{taylor2024efficiency,
      title={Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks},
      author={Niall Taylor and Upamanyu Ghose and Omid Rohanian and Mohammadmahdi Nouriborji and Andrey Kormilitzin and David Clifton and Alejo Nevado-Holgado},
      year={2024},
      eprint={2402.10597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```