butlerpeter

Members

  • Posts: 192
  • Joined
  • Last visited



butlerpeter's Achievements

Explorer (4/14)

Reputation: 0

  1. Just wanted to come and confirm that the new docker stop/start script functionality worked great. Thanks for adding that.
  2. Knowing that the script got called (or not) should be enough for now. Thanks.
  3. Thanks for that - I've just put my scripts in place and will see what happens the next time containers are updated. Incidentally, does anything get logged anywhere when the scripts are called?
  4. Is there any possibility of adding the ability to run a custom script after dockers have been updated? I have a self-built docker container which doesn't get updated very often, but it depends on a mariadb container that does get updated. When the weekly docker update process happens, if the mariadb container has been updated then my custom one falls over. It's a simple fix for me to ssh in and restart that container, but if it were possible to script it then it would save me having to (remember to) check (a rough sketch of such a script follows this list).
  5. I wouldn't try to use that one; it's rather hacky and geared towards my personal setup. I only pushed it to the docker hub to ease reinstallation for myself.
  6. Excellent news. Look forward to it being merged and released.
  7. Recently had a bit of a strange issue with the mariadb container. It had been running fine for weeks, ever since migrating to it from a container from another author. Yesterday I did the upgrade to unRAID 6.1.7, so in the course of stopping the array to reboot, the dockers were stopped. The upgrade went to plan, I rebooted the server, started the array again and all of my docker containers came back up - or so I thought. It was only some hours later that I noticed something was wrong with the mariadb container.
     Looking in the logs (which I don't have to hand unfortunately) I saw continuous, repeated, failed attempts to start mariadb. There was a log message about it not having been shut down cleanly (sorry, I don't have the exact text), with messages about recovering from a crash, and then each attempt to start was followed by a message saying that the table mysql.users didn't exist (again, sorry for not having the exact text).
     Looking at the /config/databases folder I saw that the owner of the mysql directory had been changed from 'nobody' to '103' - 103 seems to be the uid of the mysql user inside the container. "chown -R nobody:users mysql" fixed the complaint about the mysql.users table, but then there was a similar message about another mysql.*something* table, and when I looked the owner of the mysql folder had changed to 103 again. Changing the owner back to nobody this time fixed things and mariadb started up correctly (a rough sketch of the check and fix follows this list).
     I suspect that what happened was that there was an unclean shutdown (of mariadb), and when starting up again it attempted to recover; during that process it tries to ensure it has the correct access, so it changes the owner of the folder to the mysql user, which then leads to access problems and stops it accessing those mysql.* tables. I wonder if the mariadb startup script "/usr/bin/mysqld_safe" (I think!) should be changed to take into account the uid that has been specified for the container to run as, instead of just using the mysql user? Hope that makes sense!
  8. I get what you're saying about the port, but the unRAID gui is running on port 80 on the host - not port 80 of the container. It's unlikely that anybody will have their unRAID gui exposed externally on port 80. Most likely, as in my case, they might have incoming traffic on port 80 redirected to another port on the server at the router level. In my case I map container port 80 to host port 9080 (for example), then in my router redirect incoming port 80 traffic to port 9080 on my server (see the port-mapping sketch after this list). Maybe an env variable in the container to specify which method should be used could be a solution.
  9. Thanks, I clicked the remove button and removed the subdomain field. I have found a couple of issues though. Firstly, after getting it up and running I ran 'docker logs Nginx-letsencrypt' and saw a lot of repeated "runsv memcached: fatal: unable to start ./run: access denied" and "runsv php-fpm: fatal: unable to start ./run: access denied" messages. I had to enter the container with 'docker exec -it Nginx-letsencrypt /bin/bash' and chmod +x the /etc/services/php-fpm/run and /etc/services/memcached/run files (a sketch of that fix follows this list). I also had issues with the container generating the certificates, because the letsencrypt server couldn't connect back to the client to verify the domain. That was due to my using port 443 for ssh access (so I can access my server through the work proxy), which means I'm unable to redirect incoming ssl to that port. To get around it I had to enter the container again and modify '/defaults/letsencrypt.sh' to change the standalone supported challenge mode to http-01 instead of tls-sni-01. After doing all of that, it seems to be working - I can get to the default landing page on http via the host port mapped to container port 80, and also on https via the port mapped to container port 443. Now I just need to configure nginx properly.
  10. aptalca, what if you don't want to specify any subdomains e.g. you want a certificate to cover example.com and not www.example.com? I tried leaving the subdomain field empty but got a "this is a required field" message.
  11. As posted in the KVM forum, in the PXE booting OpenELEC thread: I've created a container, based on sparklyballs' tftpdserver dockerfile, that runs dnsmasq configured to proxy dns/dhcp to an existing service (e.g. a router) and which provides the tftp server required for pxe booting (a rough sketch of that kind of dnsmasq config follows this list). I've not created an unRAID template or repository, but the link to it on the docker hub is https://registry.hub.docker.com/u/butlerpeter/dnsmasq-docker-unraid/
  12. I haven't gotten around to creating an unRAID repository yet, but I have now set my dnsmasq docker container up on the docker registry. The link is https://registry.hub.docker.com/u/butlerpeter/dnsmasq-docker-unraid/
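
A minimal sketch of the kind of post-update script asked for in item 4, assuming the dependent container is named "my-custom-app" and the log path is arbitrary (both are placeholders, not details from the post):

    #!/bin/bash
    # Hypothetical hook run after the weekly docker update process.
    DEPENDENT="my-custom-app"              # placeholder container name
    LOG=/var/log/post-docker-update.log    # placeholder log location

    # Restart the dependent container only if it actually exists on this host.
    if docker ps -a --format '{{.Names}}' | grep -qx "$DEPENDENT"; then
        echo "$(date): restarting $DEPENDENT after container updates" >> "$LOG"
        docker restart "$DEPENDENT"
    fi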
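
A rough sketch of the ownership check and fix described in item 7, assuming the container's /config is mapped to a host path like /mnt/user/appdata/mariadb (the exact appdata path and container name are assumptions):

    # Check who owns the mysql system database directory on the host.
    ls -ld /mnt/user/appdata/mariadb/databases/mysql

    # If the owner has flipped from nobody to uid 103 (the in-container mysql
    # user), stop the container first so crash recovery can't change it back,
    # restore the ownership unRAID expects, then start the container again.
    docker stop mariadb
    chown -R nobody:users /mnt/user/appdata/mariadb/databases/mysql
    docker start mariadb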
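
For item 8, a minimal example of the mapping being described: container port 80 published on host port 9080 (and 443 on 9443), with the router then forwarding incoming port 80 traffic to 9080 on the server. The image name is omitted and the port numbers are only illustrative:

    # Publish the container's web ports on alternative host ports.
    docker run -d --name Nginx-letsencrypt \
      -p 9080:80 \
      -p 9443:443 \
      <image-name>   # placeholder; the actual image is not named here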
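
A sketch of the fixes from item 9, using the container name from that post. The chmod matches what was done by hand; the sed line is only an assumed way of scripting the same letsencrypt.sh edit and presumes the challenge type appears literally in that file:

    # Make the service run scripts executable inside the running container.
    docker exec -it Nginx-letsencrypt chmod +x \
      /etc/services/php-fpm/run /etc/services/memcached/run

    # Switch the standalone challenge type from tls-sni-01 to http-01.
    docker exec -it Nginx-letsencrypt sed -i 's/tls-sni-01/http-01/' /defaults/letsencrypt.sh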
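
For item 11, a minimal sketch of the sort of proxy-DHCP/TFTP dnsmasq configuration that container provides. The subnet, tftp root and boot file name are placeholders rather than values taken from the actual image, and DNS is left at dnsmasq's default forwarding behaviour:

    # Write a minimal proxy-DHCP + TFTP config for PXE booting.
    cat > /etc/dnsmasq.d/pxe-proxy.conf <<'EOF'
    # Proxy mode: the existing router keeps handing out leases;
    # dnsmasq only supplies the PXE boot information and TFTP service.
    dhcp-range=192.168.1.0,proxy
    enable-tftp
    tftp-root=/tftpboot
    pxe-service=x86PC,"PXE boot",pxelinux
    EOF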