I closed all the other applications on my computer and was able to get the portal to work. I monitored the memory usage of the services and noticed that it was climbing slowly over time: from 1.5 GB initially to 5 GB, after which it started filling my swap.
The spark-service was logging this line:
spark-service | Re-format filesystem in Storage Directory root= /hadoop/hdfs/data/dfs/namenode; location= null ? (Y or N) Invalid input:
I've been trying to get data upload to work on my local gen3 installation, and ran into some issues similar to what's described in this post:
I added the self-signed certificate to the trusted certificates on my OS (Ubuntu). After a few tries I was able to configure a profile with the following command:
gen3-client configure --profile=zander --cred=~/compose-services/credentials.json --apiendpoint=https://localhost
When I run gen3-client auth --profile=zander
However, when I run gen3-client upload --profile=zander --upload-path=~/Documents/text.txt
I get the following error:
It looks like Fence is having trouble reaching the AWS buckets and is getting errors. I will consult the developers about this and let you know. Do you see the same behavior after running docker-compose down and then docker-compose up -d ?
From your new logs I can see that Fence is complaining about the AWS buckets, and that fence-config.yaml does not contain the aws_access_key_id and aws_secret_access_key fields in the AWS_CREDENTIALS block. Could you please modify fence-config.yaml as described in Step 3 of the post above and see if that helps?
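For reference, here is a minimal sketch of what that part of fence-config.yaml could look like. The 'CRED1' label, the bucket name, and the region are assumptions for illustration; substitute your own values, and keep the structure consistent with the rest of your existing config:

```yaml
# Sketch of the AWS_CREDENTIALS block in fence-config.yaml.
# 'CRED1' is an arbitrary label (assumption); the two key fields are the
# ones Fence was reported to be missing. Values below are placeholders.
AWS_CREDENTIALS:
  'CRED1':
    aws_access_key_id: 'YOUR_ACCESS_KEY_ID'
    aws_secret_access_key: 'YOUR_SECRET_ACCESS_KEY'

# Buckets reference a credentials entry by its label.
# 'your-upload-bucket' and the region are assumed example values.
S3_BUCKETS:
  'your-upload-bucket':
    cred: 'CRED1'
    region: 'us-east-1'

DATA_UPLOAD_BUCKET: 'your-upload-bucket'
```

After editing the file, a docker-compose down followed by docker-compose up -d should make Fence pick up the new config.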