I am having trouble running Gen3 using compose-services locally. The first time I ran the services after following the setup instructions, things worked fine. I could log in with my Google account and use the data portal with no disconnects or crashes.
However, when I tried to run the services a second time (a day later, after composing down), nothing would show up on localhost. I did a clean install, and after trying to start up multiple times I only got it to load once, and it was buggy. I attempted another clean install, but things still don't seem to be working.
Thanks for providing your dump. Are you using Docker Desktop? I'm asking because our developers mentioned a very similar problem caused by Docker Desktop settings: it was set to make 4 CPUs available to the Docker Engine, and things started working after limiting it to 1 CPU in Docker > Preferences > Advanced. Here is the relevant log from logs-esproxy-service.txt:
esproxy-service | ERROR: [1] bootstrap checks failed
esproxy-service | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
esproxy-service | [2020-01-16T13:59:32,475][INFO ][o.e.n.Node ] [cogtRnY] stopping ...
esproxy-service | [2020-01-16T13:59:32,937][INFO ][o.e.n.Node ] [cogtRnY] stopped
esproxy-service | [2020-01-16T13:59:32,938][INFO ][o.e.n.Node ] [cogtRnY] closing ...
esproxy-service | [2020-01-16T13:59:33,030][INFO ][o.e.n.Node ] [cogtRnY] closed
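That bootstrap check failure is Elasticsearch refusing to start because the host kernel's vm.max_map_count is below its minimum. On a Linux host it can be raised with sysctl, roughly like this (on Docker Desktop for Mac/Windows the setting lives inside the Docker VM instead, so this sketch assumes a Linux host):

```shell
# Check the current limit (Elasticsearch wants at least 262144)
sysctl -n vm.max_map_count

# Raise it for the running kernel (requires root; resets on reboot)
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```

After raising the limit, restarting the stack should let esproxy-service pass its bootstrap checks.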
I closed all the other applications on my computer and was able to get the portal to work. I monitored the services and noticed their memory usage climbing slowly over time: it went from 1.5 GB initially to 5 GB, and then started filling my swap.
The spark-service was logging this line:
spark-service | Re-format filesystem in Storage Directory root= /hadoop/hdfs/data/dfs/namenode; location= null ? (Y or N) Invalid input:
I've been trying to get data upload to work on my local gen3 installation, and ran into some issues similar to what's described in this post:
I added the self-signed certificate to trusted certificates on my OS (Ubuntu). After a few tries I was able to configure a profile using the following command: gen3-client configure --profile=zander --cred=~/compose-services/credentials.json --apiendpoint=https://localhost
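For reference, on Ubuntu adding a self-signed certificate to the system trust store looks roughly like this (the filename gen3-local.crt is a placeholder for wherever you exported the certificate; the file must have a .crt extension for update-ca-certificates to pick it up):

```shell
# Copy the self-signed certificate into the system trust store
sudo cp gen3-local.crt /usr/local/share/ca-certificates/gen3-local.crt

# Rebuild the trusted certificate bundle
sudo update-ca-certificates
```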
When I run gen3-client auth --profile=zander
it returns:
However, when I run gen3-client upload --profile=zander --upload-path=~/Documents/text.txt
I get the following error:
It looks like Fence is unhappy trying to reach the AWS buckets and is getting errors. I will consult with the developers on this and let you know. Do you experience the same after docker-compose down and then docker-compose up -d?
From your new logs I see Fence complains about AWS buckets, and fence-config.yaml does not contain the aws_access_key_id and aws_secret_access_key fields in the AWS_CREDENTIALS block. Could you please modify fence-config.yaml as described in Step 3 of the post above and see if it helps?
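For reference, the AWS_CREDENTIALS block in fence-config.yaml looks roughly like this (the 'CRED1' profile name and the values shown are placeholders; substitute your own credentials):

```yaml
AWS_CREDENTIALS:
  'CRED1':
    aws_access_key_id: 'YOUR_ACCESS_KEY_ID'
    aws_secret_access_key: 'YOUR_SECRET_ACCESS_KEY'
```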
It's interesting that Fence didn't pick up the changes. Did you change fence-config.yaml in the templates folder or in the Secrets folder? The change should go in Secrets/fence-config.yaml.