Issue with Compose Services

Hi Zander!

We reproduced the issue you are running into and created a story to fix it. Thank you! I will keep you posted on the progress.


Hey @Viktorija could you send me an invite to the slack channel? Thanks!

We sent you an invite @Zander_Giuffrida :slight_smile:

Hi @Zander_Giuffrida!

Our developers suggest running this command before starting Compose Services:

sudo sysctl -w vm.max_map_count=262144

This should help esproxy-service start. Could you try this and let me know the result?
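For context: esproxy-service fronts Elasticsearch, which requires vm.max_map_count to be at least 262144 for its memory-mapped indices, and sysctl -w only lasts until reboot. A sketch of making the setting persistent (assumes a standard Ubuntu host with /etc/sysctl.conf):

```shell
# vm.max_map_count=262144 is the minimum Elasticsearch requires.
# Persist it across reboots (assumption: Ubuntu host, /etc/sysctl.conf in use):
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p                 # reload settings from /etc/sysctl.conf
sysctl -n vm.max_map_count     # verify the current value
```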

Hey @Viktorija,

Yes that seems to be working. I'll let you know if I run into any other issues.


Hello @Viktorija,

I've been trying to get data upload to work on my local gen3 installation, and ran into some issues similar to what's described in this post:

I added the self-signed certificate to trusted certificates on my OS (Ubuntu). After a few tries I was able to configure a profile using the following command:
gen3-client configure --profile=zander --cred=~/compose-services/credentials.json --apiendpoint=https://localhost

When I run
gen3-client auth --profile=zander
it returns:
[screenshot: Screenshot from 2020-02-04 13-42-17]

However, when I run
gen3-client upload --profile=zander --upload-path=~/Documents/text.txt
I get the following error:

Could you help me resolve this issue?



File submission is not fully working in Compose Services, but it should not give you an Internal Server Error. Could you share your logs?

Also, I wonder if you are able to create a program/project in your commons as described here
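In case it helps, here is one common way to capture logs from Compose Services (a sketch; it assumes you run it from your compose-services checkout, and that the Fence container is named fence-service as in the compose-services docker-compose.yaml):

```shell
# Dump logs from all containers defined in docker-compose.yaml:
docker-compose logs --no-color > compose-logs.txt

# Or just Fence (adjust the service name if yours differs):
docker-compose logs fence-service > fence-logs.txt
```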

Hello @Viktorija. Sorry for the delay in my response!

It seems that I'm unable to get the portal to show up on localhost again. Here's a link to the zip I generated from the script.

Hi @Zander_Giuffrida!

Looks like Fence is not happy trying to reach the AWS buckets and is getting errors. I will consult with the developers on this and let you know. Do you experience the same after docker-compose down and then docker-compose up -d?

Hi @Zander_Giuffrida!

Could you please try this:

  1. Shut down Compose Services with docker-compose down
  2. Comment out these lines in nginx.conf:
#        location /guppy/ {
#                proxy_pass http://guppy-service/;
#        }
  3. Modify the AWS_CREDENTIALS block in fence-config.yaml and add aws_access_key_id: '' and aws_secret_access_key: '' to the credentials:
    AWS_CREDENTIALS:
      'CRED1':
        aws_access_key_id: ''
        aws_secret_access_key: ''
      'CRED2':
        aws_access_key_id: ''
        aws_secret_access_key: ''
    Leave the S3_BUCKETS entries that reference these credentials (cred: 'CRED1', cred: 'CRED2', cred: '*', and the bucket with cred: 'CRED1' and role-arn: 'arn:aws:iam::role1') as they are.
  4. Start Docker Compose with docker-compose up -d

Please let me know the result :slight_smile:
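The steps above can be sketched as a shell transcript (assuming compose-services is checked out in ~/compose-services; the two config edits are done by hand in your editor):

```shell
cd ~/compose-services
# 1. Shut everything down
docker-compose down
# 2. Edit nginx.conf and comment out the /guppy/ location block
# 3. Edit Secrets/fence-config.yaml and add the empty
#    aws_access_key_id / aws_secret_access_key fields under AWS_CREDENTIALS
# 4. Bring the stack back up in detached mode
docker-compose up -d
```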

Hey @Viktorija

I'm able to get the login screen to come up, but when I try to log in it throws an Internal Server Error.

Here's a link to the logs

Hi @Zander_Giuffrida,

From your new logs I see that Fence complains about the AWS buckets, and fence-config does not contain the aws_access_key_id and aws_secret_access_key fields in the AWS_CREDENTIALS block. Could you please modify fence-config.yaml as described in Step 3 in the post above and see if it helps?

Hey @Viktorija

Those fields were already present in fence-config.yaml. Here's a screenshot of it.

It's interesting that Fence didn't pick the changes up. Did you change fence-config.yaml in the templates folder or in the Secrets folder? The change should go to Secrets/fence-config.yaml

I changed Secrets/fence-config.yaml

Do I understand correctly that, after you re-run Docker Compose with the modified fence-config.yaml, your new logs still don't show your changes to fence-config.yaml? I will check with the developers what's happening.

Oh, you are right! The script removes the credentials information for privacy reasons.
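That kind of redaction could be done with a simple filter. A hypothetical sketch (redact_creds is my name for it, not the actual script's) of how a log-collection script might blank out those values while keeping the field names visible:

```shell
# Hypothetical sketch: blank the values of aws_access_key_id /
# aws_secret_access_key so collected logs/configs show the fields
# but not the secrets.
redact_creds() {
  sed -E "s/(aws_(access_key_id|secret_access_key):[[:space:]]*')[^']*(')/\1\3/g"
}

# Example:
echo "aws_access_key_id: 'AKIAEXAMPLE'" | redact_creds
# → aws_access_key_id: ''
```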

I've pulled the latest images but still get an Internal Server Error when I try to log in. Do you think it would be worthwhile to try a clean install?

I would try :slight_smile:
I asked our developers to look at your logs again. It might take some time, and I'll let you know their opinion on what could be breaking things.

Unfortunately the clean install didn't work. Here are the latest logs.