ns committed on
Commit 8d647a5 · 1 Parent(s): 4f62088
Files changed (2)
  1. .github/workflows/main.yml +31 -0
  2. README.md +3 -1
.github/workflows/main.yml ADDED
@@ -0,0 +1,31 @@
+# This is a basic workflow to help you get started with Actions
+
+name: CI
+
+# Controls when the workflow will run
+on:
+  # Triggers the workflow on push or pull request events but only for the main branch
+  push:
+    branches: [ main ]
+  pull_request:
+    branches: [ main ]
+
+  # Allows you to run this workflow manually from the Actions tab
+  workflow_dispatch:
+
+# A workflow run is made up of one or more jobs that can run sequentially or in parallel
+jobs:
+  # This workflow contains a single job called "build"
+  build:
+    # The type of runner that the job will run on
+    runs-on: ubuntu-latest
+
+    # Steps represent a sequence of tasks that will be executed as part of the job
+    steps:
+
+      - name: Checkout code
+        uses: actions/checkout@v2
+
+      # Runs a single command using the runner's shell
+      - name: docker compose
+        run: docker-compose up -d
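
The new workflow's only real step runs `docker-compose up -d`, but the compose file itself is not part of this commit. A minimal sketch of what that step expects at the repository root might look like the following; the `streamlit` service name, build context, port, and volume mount are assumptions inferred from the README's directory tree, not the actual file.

```yaml
# Hypothetical docker-compose.yml -- NOT part of this commit.
# Service name, build context, port, and volume mount are assumptions
# inferred from the README's directory tree.
version: "3.8"

services:
  streamlit:
    build: ./streamlit        # assumed build context for the demo app
    ports:
      - "8501:8501"           # Streamlit's default serving port
    volumes:
      - ./volumes:/volumes    # persistent data directory from the README tree
```

Note that `docker-compose up -d` returns as soon as the containers are started in the background, so this CI job passes whenever every service builds and launches; it runs no tests or health checks on its own.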
README.md CHANGED
@@ -2,6 +2,8 @@
 
 tl;dr you should include text inputs along with images
 
+[![YourActionName Actions Status](https://github.com/nathansutton/prerad/workflows/CI/badge.svg)](https://github.com/nathansutton/prerad/actions)
+
 Available @ https://nathansutton-prerad.hf.space
 
 Machine learning in radiology has come a long way. For a long time the goal was simply to make probability estimates of different conditions available to the radiologist at the time of interpretation. As evidence, see any of the hundreds of AI vendors that have commercialized computer vision algorithms. On the academic frontier, recent advances have made it possible to generate realistic-sounding radiology reports directly from an image. The first paper I found describing such a model was from 2017, but there have been many more recently with the advent of transformers. However, every example I have found suffers from the same structural problem.
@@ -68,7 +70,7 @@ docker-compose run train
 | |-- etl # transforms raw data from physionet into jsonlines files
 | |-- jupyter # interactive notebooks
 | |-- physionet # download the MIMIC-CXR and MIMIC-CXR-JPG data from physionet
-| |-- prerad # a small streamlit application to demo the model functionality
+| |-- streamlit # a small streamlit application to demo the model functionality
 |-- volumes # persistent data
 | |-- notebooks # jupyter notebooks persisted here
 | |-- physionet # physionet data is persisted here
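
Two details tie the changes in this commit together. First, the badge added to the README resolves by workflow name, so the `CI` segment of `workflows/CI/badge.svg` must match `name: CI` in main.yml. Second, the hunk context above shows the README already documents `docker-compose run train`; if CI should eventually exercise that entrypoint as well, a hedged sketch of such an extended workflow might look like this, where the `train` service and the teardown step are assumptions, not part of this commit.

```yaml
# Hypothetical extension of .github/workflows/main.yml -- NOT in this commit.
# Assumes the compose file defines a `train` service, matching the
# `docker-compose run train` command documented in the README.
name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: docker compose
        run: docker-compose up -d

      # Added: exercise the training entrypoint the README documents
      - name: train model
        run: docker-compose run train

      # Added: clean up containers even if an earlier step failed
      - name: tear down
        if: always()
        run: docker-compose down
```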