I'm looking into setting up a data commons to host data for a collaborative project my lab's PI is involved in. Following the docker-compose instructions, I have a local installation running on my laptop, but the Workspace tab takes me to a page that reports '401 Authorization Required'.
I can see that the Jupyter service is running with healthy status, and everything else seems to be working as I expected for this fresh instance. Running docker logs jupyter-service returns:
++++
Container must be run with group "root" to update passwd file
Executing the command: jupyter notebook
[W 19:45:48.031 NotebookApp] base_project_url is deprecated, use base_url
[I 19:45:48.070 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
[W 19:45:49.897 NotebookApp] All authentication is disabled. Anyone who can connect to this server will be able to run code.
[I 19:45:50.227 NotebookApp] JupyterLab extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
[I 19:45:50.227 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 19:45:50.295 NotebookApp] Serving notebooks from local directory: /home/jovyan
[I 19:45:50.296 NotebookApp] The Jupyter Notebook is running at:
[I 19:45:50.296 NotebookApp] http://(9b10e38926d2 or 127.0.0.1):8888/lw-workspace/
[I 19:45:50.296 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
++++
Can anyone help me figure out what I've done wrong?
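Since the Jupyter log above says all authentication is disabled, my guess is the 401 is coming from the nginx reverse proxy in front of Jupyter rather than from Jupyter itself. I think a check like the following would tell me where the 401 originates (the service names are guesses based on my docker-compose.yml, I'm assuming curl is available inside the revproxy container, and the host port may need adjusting to match your port mappings):
++++
# Hit Jupyter directly over the compose network, bypassing nginx entirely.
# If this returns 200, Jupyter itself is fine.
docker-compose exec revproxy-service \
    curl -s -o /dev/null -w '%{http_code}\n' http://jupyter-service:8888/lw-workspace/

# Hit the same path through the reverse proxy on the host.
# If this returns 401, the proxy's auth check is what's rejecting the request.
curl -k -s -o /dev/null -w '%{http_code}\n' https://localhost/lw-workspace/
++++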
I tried this and am still getting the '401 Authorization Required' error message. The nginx.conf file was already exactly as you displayed it, so I didn't have to change anything there. Here is the user.yaml file after adding the workspace policy as you suggested:
++++
authz:
  # policies automatically given to anyone, even if they are not authenticated
  anonymous_policies:
    - open_data_reader

  # policies automatically given to authenticated users (in addition to their other policies)
  all_users_policies: []

  groups:
    # can CRUD programs and projects and upload data files
    - name: data_submitters
      policies:
        - services.sheepdog-admin
        - data_upload
        - MyFirstProject_submitter
      users:
        - jmartin77777@gmail.com
    # can create/update/delete indexd records
    - name: indexd_admins
      policies:
        - indexd_admin
      users:
        - jmartin77777@gmail.com

  resources:
    - name: workspace
    - name: data_file
    - name: services
      subresources:
        - name: sheepdog
          subresources:
            - name: submission
              subresources:
                - name: program
                - name: project
    - name: open
    - name: programs
      subresources:
        - name: MyFirstProgram
          subresources:
            - name: projects
              subresources:
                - name: MyFirstProject

  policies:
    - id: workspace
      description: be able to use workspace
      resource_paths:
        - /workspace
      role_ids:
        - workspace_user
    - id: data_upload
      description: upload raw data files to S3
      role_ids:
        - file_uploader
      resource_paths:
        - /data_file
    - id: services.sheepdog-admin
      description: CRUD access to programs and projects
      role_ids:
        - sheepdog_admin
      resource_paths:
        - /services/sheepdog/submission/program
        - /services/sheepdog/submission/project
    - id: indexd_admin
      description: full access to indexd API
      role_ids:
        - indexd_admin
      resource_paths:
        - /programs
    - id: open_data_reader
      role_ids:
        - reader
        - storage_reader
      resource_paths:
        - /open
    - id: all_programs_reader
      role_ids:
        - reader
        - storage_reader
      resource_paths:
        - /programs
    - id: MyFirstProject_submitter
      role_ids:
        - reader
        - creator
        - updater
        - deleter
        - storage_reader
        - storage_writer
      resource_paths:
        - /programs/MyFirstProgram/projects/MyFirstProject

  roles:
    - id: file_uploader
      permissions:
        - id: file_upload
          action:
            service: fence
            method: file_upload
    - id: workspace_user
      permissions:
        - id: workspace_access
          action:
            service: jupyterhub
            method: access
    - id: sheepdog_admin
      description: CRUD access to programs and projects
      permissions:
        - id: sheepdog_admin_action
          action:
            service: sheepdog
            method: '*'
    - id: indexd_admin
      description: full access to indexd API
      permissions:
        - id: indexd_admin
          action:
            service: indexd
            method: '*'
    - id: admin
      permissions:
        - id: admin
          action:
            service: '*'
            method: '*'
    - id: creator
      permissions:
        - id: creator
          action:
            service: '*'
            method: create
    - id: reader
      permissions:
        - id: reader
          action:
            service: '*'
            method: read
    - id: updater
      permissions:
        - id: updater
          action:
            service: '*'
            method: update
    - id: deleter
      permissions:
        - id: deleter
          action:
            service: '*'
            method: delete
    - id: storage_writer
      permissions:
        - id: storage_creator
          action:
            service: '*'
            method: write-storage
    - id: storage_reader
      permissions:
        - id: storage_reader
          action:
            service: '*'
            method: read-storage

clients:
  wts:
    policies:
      - all_programs_reader
      - open_data_reader

users:
  jmartin77777@gmail.com:
    tags:
      name: John Martin
      email: jmartin77777@gmail.com
    policies:
      - MyFirstProject_submitter
      - workspace
  username2:
    tags:
      name: John Doe
      email: johndoe@gmail.com
    policies:
      - MyFirstProject_submitter

cloud_providers: {}
groups: {}
++++
I am not fluent in YAML, so I'm not 100% sure whether you meant for me to add the workspace policy directly under jmartin77777@gmail.com in the users: block, or to include it at the end of the file, which to me looks like it would assign the policy to the 'username2:' entry instead; see the snippet below for how I'm reading it.
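To spell out how I'm reading the indentation (this is just my understanding of YAML scoping, using the entries from my own file):
++++
users:
  jmartin77777@gmail.com:
    policies:
      - MyFirstProject_submitter
      - workspace          # indented under my user entry, so it should apply to me
  username2:
    policies:
      - MyFirstProject_submitter
      # a '- workspace' line added here would grant the policy to username2 instead
++++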
Another issue I'm working on is that I haven't yet set up hosting for this local docker-compose setup. My AWS account is through my university, and they did not give me permission to set up an IAM user that I could use to build a secure S3 bucket. I've put in a request for that, but as things stand there is no S3 bucket behind this instance. Could that be the reason the workspace is unable to start? Does it need to host workspaces inside a mounted S3 bucket, or can it use space internal to the container?
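For what it's worth, the Jupyter log above says 'Serving notebooks from local directory: /home/jovyan', so I'd guess the workspace uses container-local storage rather than S3. I think I can verify whether anything is mounted behind that path with plain docker inspect (jupyter-service is the service name from my docker-compose.yml; the pipe at the end is just optional pretty-printing):
++++
# List the mounts attached to the running Jupyter container.
# An empty list would mean /home/jovyan lives in the container's own
# writable layer, i.e. no S3 or host volume is involved.
docker inspect --format '{{json .Mounts}}' \
    $(docker-compose ps -q jupyter-service) | python3 -m json.tool
++++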
Ah, that would be because I was making some other changes in the gitops.json file, and since I'm still not sure how to sync a JSON file into the running container, I just stopped everything, cleaned up, and re-ran docker-compose up from scratch (after making the changes discussed). I know I'm doing a lot of things inefficiently at the moment, but as I try to understand the system I find some comfort in sticking with what I know works for me; a sketch of the workflow I'd like to move to is below.
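If it helps anyone suggest a better workflow, what I was hoping to do instead of a full teardown is roughly this (the service name 'portal-service' and the destination path are my guesses; the real path should be in the volumes: section of docker-compose.yml):
++++
# Copy the edited file straight into the running portal container.
# docker cp wants a container ID, which docker-compose ps -q provides.
docker cp Secrets/gitops.json \
    $(docker-compose ps -q portal-service):/data-portal/data/config/gitops.json

# Then restart only that one service instead of the whole stack.
docker-compose restart portal-service
++++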
What I'm hoping to do eventually is figure out how to mount local directories into my docker-compose instance so I can just change files in place. But one thing at a time...
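Concretely, I'm imagining a docker-compose.override.yml along these lines (compose merges an override file with docker-compose.yml automatically; the service name and container path are placeholders I'd need to match to the real compose file):
++++
# docker-compose.override.yml -- adds a bind mount without editing the
# original docker-compose.yml, so edits to the local file appear in the
# container immediately.
version: '3'
services:
  portal-service:
    volumes:
      - ./Secrets/gitops.json:/data-portal/data/config/gitops.json
++++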