[DOCKER] Pynab NZB Indexer



In beta repo.

 

https://github.com/Murodese/pynab

 

NOTE: on first run the container has to generate the database structure and download some dummy data to seed it.

 

This could take an hour or more; don't stop the container until it's finished.
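If you want to keep an eye on progress, the easiest way is to follow the container log (the container name pynab here is an assumption, use whatever you called yours):

docker logs -f pynab
# the first run ends with "IMPORT COMPLETED" once the initial data dump has finished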

 

 

There are several variables that you have to click Advanced to get to and edit (a command-line sketch using them follows the list).

 

news_server :- the address of your Usenet provider's news server

news_user :- the username for your Usenet account

news_passwd :- the password for your Usenet account

news_port :- the port you connect to your Usenet account on

news_ssl :- set to 0 if your Usenet provider doesn't use SSL, 1 if it does

regex_url :- pre-filled, but if you have a better regex dump in SQL format, enter its URL here

backfill_days :- the number of days you want to backfill
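For reference, outside of unRAID these template variables should map onto plain docker -e environment settings. This is a rough sketch only - the image name, host paths, port mapping and the assumption that the variables are passed as environment variables are mine, not the template's, so check the repo before relying on it:

docker run -d --name pynab \
  -v /mnt/user/appdata/pynab:/config \
  -v /mnt/user/appdata/pynab/data:/data \
  -p 8080:8080 \
  -e news_server=news.example.com \
  -e news_user=myusername \
  -e news_passwd=mypassword \
  -e news_port=563 \
  -e news_ssl=1 \
  -e backfill_days=30 \
  sparklyballs/pynab      # image name is an assumption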

 

 

Other config options are in /config, including the main config.py file and a JSON file (groups.json) to which you can add/remove newsgroups.
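If you edit groups.json by hand, a quick sanity check is to run it through Python's built-in JSON validator (the container name is an assumption):

docker exec pynab python3 -m json.tool /config/groups.json
# prints the file pretty-printed if the JSON is valid, or an error pointing at the broken spot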

 

 

It generates a default admin API key of

 

303c5cd0b18e9ebe093e9b9dae3d3c74
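Pynab speaks the newznab-style API, so once the container is up you should be able to test the key with a caps request - the port here is an assumption, use whatever your config.py/template actually exposes:

curl "http://YOUR-SERVER-IP:8080/api?t=caps&apikey=303c5cd0b18e9ebe093e9b9dae3d3c74"
# a valid key should return an XML capabilities document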

 

 

To-do list

Work out a script for adding users etc. (a rough sketch of the shape it might take is below).

Possibly add more variables to the template.
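For the user script, the rough shape would probably be a docker exec wrapper around pynab.py. Only "user list" is confirmed anywhere in this thread - the create/delete subcommand names below are guesses, so check python3 /opt/pynab/pynab.py --help first:

#!/bin/bash
# hypothetical wrapper - only "user list" is confirmed in this thread,
# the other subcommand names are guesses
case "$1" in
  list)   docker exec pynab python3 /opt/pynab/pynab.py user list ;;
  create) docker exec pynab python3 /opt/pynab/pynab.py user create "$2" ;;
  delete) docker exec pynab python3 /opt/pynab/pynab.py user delete "$2" ;;
  *)      echo "usage: $0 {list|create <email>|delete <email>}" ;;
esac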

 

 

 

Thanks to

Squid for the JSON parsing script to enable/disable groups.


Thank you!! Gonna give this a try. There are a couple of "things" I think I will miss from either Newznab or nZEDb.

 

Main one is the predb stuff that "decodes" gibberish release names into readable text. This was used in Newznab, and the settings for it were a closely guarded secret.  8)

 

In nZEDb this was handled via IRC scraping, which required a complicated setup to get working. But in the end, in post-processing, the gibberish releases were properly renamed.

 

Not sure where to enter the NN+ code so I can get the latest regexes... I know you upped something, but isn't this something you have to fetch regularly?

 

I will have to try the import options from Newznab for my users, API keys, passwords, etc.

 

I will try to import my current chunk of existing NZBs to properly populate it with data.

 

Big learning curve, but hopefully it will be well worth it.

 

Thank you Mr. Sparklyballs!!

 


Got a thorny issue with this one, and to a much lesser extent the same issue arises with the musicbrainz container.

When the container stops, Docker tells the service(s) to stop and, if they haven't stopped within 10 seconds, kills them.

PostgreSQL can handle this thanks to its fault-tolerant design; you only lose the transactions that were in progress at shutdown, which for both containers doesn't matter as they'll just be redone anyway.

However, the issue is (and this affects the pynab container more) that on restart postgres runs a recovery routine and doesn't allow transactions on the database until it's done.

I thought a setting in the postgres config file (hot_standby = on) would solve this, but it hasn't.

musicbrainz is going to be easier to fix, as I can just increase the number of retries for the main program to start and the database will eventually be ready; but for pynab I need to run some operations on the live database before the main program comes up.

 

 

I'm thinking on it and have three possibles.

1. Increase the shutdown delay between SIGTERM and SIGKILL (see the sketch right after this list).

2. Find a test that gives me an error while the database is busy and wrap it in a while/until loop.

3. Abandon my operations on the database before the main program comes up (that means loads of command-line action for users though, and when I say loads I mean LOADS).
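For option 1, the knob already exists on the docker side; a sketch, assuming the container is named pynab:

# give the services 60 seconds to shut down cleanly before docker resorts to SIGKILL
docker stop -t 60 pynab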


I've found my test... not sure how to implement it yet though.

Running

python3 /opt/pynab/pynab.py user list

gives this load of garbage if the database isn't ready:

 

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
    return self._pool.get(wait, self._timeout)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
    raise Empty
sqlalchemy.util.queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
    raise value
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: FATAL:  the database system is starting up


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/pynab/pynab.py", line 247, in <module>
    list_users()
  File "/opt/pynab/pynab.py", line 94, in list_users
    user_list = pynab.users.list()
  File "/opt/pynab/pynab/users.py", line 11, in list
    for user in users:
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
    close_with_result=True)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
    **kw)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 882, in connection
    execution_options=execution_options)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
    engine, execution_options)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
    conn = bind.contextual_connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2073, in _wrap_pool_connect
    e, dialect, self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1403, in _handle_dbapi_exception_noconnection
    exc_info
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 188, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 181, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
    raise value
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL:  the database system is starting up

 

 

vs. this if it is ready:

 

Email: admin@localhost	API Key: xxxxxxxxxxxxxxxxxxxx 	Grabs: 0

 

That list will change as users are added or changed, but the

FATAL:  the database system is starting up

is constant whenever the database isn't ready.

Some grep magic giving a true/false on "FATAL:  the database system is starting up" should do it, but I don't know how to write that yet.

 

 

Grepping for the FATAL error is much preferable, because I can find a different test command for musicbrainz but reuse the same grep test - the "database system is starting up" message will always be there, whatever the container is used for.


Thank you for posting about the error appearing while it's still building the database. I installed the docker before you put that post up and have been getting the error ever since. I'm guessing I will just have to keep waiting for it to finish the database. So far it has been going for about an hour.



 

The current workaround is to start the container, let it run for about 4-5 minutes, then stop it and start it again.

If you don't see any garbage about the database being busy in the logs, it's working.
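In command-line terms the workaround boils down to something like this (the container name is an assumption):

docker start pynab     # first start
sleep 300              # let it run for roughly five minutes
docker stop pynab
docker start pynab     # second start should come up with the database ready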

 

 

I'm closing in on a proper fix; I just need to teach myself some bash scripting I haven't done before.


This seems to be doing the trick...

 

# test that the database isn't still rebuilding before proceeding
echo "Testing whether database is ready"
# "user list" has to hit the database; while postgres is still recovering it dies with
# "FATAL:  the database system is starting up" on stderr, so pipe stderr to grep
# (stdout is discarded) and loop until no FATAL lines are counted
until [ "$(python3 /opt/pynab/pynab.py user list 2>&1 >/dev/null | grep -ci Fatal:)" = "0" ]
do
    echo "waiting....."
    sleep 3s
done
echo "database appears ready, proceeding"


I noticed a few things with this. I thought I was getting the same error as you, but I guess I was not. PostgreSQL was not running on startup, so it was not able to see the database. I got that started, and then it did not have a pynab database. I added that, and of course it then did not have the pynab user. I'm looking at adding the user and seeing if pynab works a little better.



 

It should do all of those things on the first run of the container.


I just tried it once more, made sure I deleted the config directory and the data directory, and still am not able to get PostgreSQL to start up, and there's no database present. I don't really know what I could be doing wrong; it basically does everything for you. The script is in /etc/init.d so it should start up. I also made sure it was marked as starting up and that was set.



 

The postgres script in /etc/init.d shouldn't be invoked inside the docker; that's for a regular Linux environment.

The scripts involved are all in /etc/my_init.d, and there are 5 of them in all, prefixed with the numbers 001-005 (the actual filenames are listed after this breakdown):

001 sets the time

002 shifts some config files around and applies the settings from the template screen in unRAID

003 brings up postgres, initialises an empty data structure, sets up a user, brings up postgres proper and then does the initial data dump (which takes a long time)

004 sets some groups based on a JSON file

005 brings everything else up
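The actual filenames, as they show up in the log further down (the 005 name does not appear in that log, so treat it as unknown):

docker exec pynab ls /etc/my_init.d
# 001-fix-the-time.sh
# 002-set-the-config.sh
# 003-postgres-initialise.sh
# 004-set-the-groups.sh
# 005-*.sh (name not shown in the log below)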

 

 

 


Here is my log...

 

*** Running /etc/my_init.d/001-fix-the-time.sh...
Current default time zone: 'America/Los_Angeles'
Local time is now: Wed Jun 3 01:45:59 PDT 2015.
Universal Time is now: Wed Jun 3 08:45:59 UTC 2015.
*** Running /etc/my_init.d/002-set-the-config.sh...
config.js exists in /config, may require editing
config.py exists in /config, may require editing
groups.json exists in /config, may require editing
*** Running /etc/my_init.d/003-postgres-initialise.sh...
initialising empty databases in /data
completed initialisation
2015-06-03 01:46:06,085 CRIT Supervisor running as root (no user in config file)
2015-06-03 01:46:06,088 INFO supervisord started with pid 55
2015-06-03 01:46:07,091 INFO spawned: 'postgres' with pid 59
2015-06-03 01:46:07,103 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:08,105 INFO spawned: 'postgres' with pid 60
2015-06-03 01:46:08,117 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:10,121 INFO spawned: 'postgres' with pid 61
2015-06-03 01:46:10,133 INFO exited: postgres (exit status 2; not expected)
setting up pynab user and database
2015-06-03 01:46:13,138 INFO spawned: 'postgres' with pid 87
2015-06-03 01:46:13,150 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:14,151 INFO gave up: postgres entered FATAL state, too many start retries too quickly
pynab user and database created
building initial nzb import
THIS WILL TAKE SOME TIME, DO NOT STOP THE DOCKER
IMPORT COMPLETED
*** Running /etc/my_init.d/004-set-the-groups.sh...
Testing whether database is ready
database appears ready, proceeding
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
    return self._pool.get(wait, self._timeout)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
    raise Empty
sqlalchemy.util.queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
    raise value
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/pynab/pynab.py", line 258, in <module>
    group_list()
  File "/opt/pynab/pynab.py", line 177, in group_list
    groups = pynab.groupctl.group_list()
  File "/opt/pynab/pynab/groupctl.py", line 72, in group_list
    for group in groups:
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
    close_with_result=True)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
    **kw)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 882, in connection
    execution_options=execution_options)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
    engine, execution_options)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
    conn = bind.contextual_connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2073, in _wrap_pool_connect
    e, dialect, self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1403, in _handle_dbapi_exception_noconnection
    exc_info
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 188, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 181, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
    raise value
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?


I just increased my docker image size from 10GB to 20GB to make sure; that didn't seem to help. My config goes to /mnt/user/appdata/pynab and the data directory goes to /mnt/user/appdata/pynab/data, which is where I keep all of my docker image configs. I'll get the other logs.
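In the meantime, a couple of commands that might show why postgres keeps exiting with status 2 - the container name, and the idea that supervisord/postgres write their logs in these locations, are assumptions:

docker exec -it pynab supervisorctl status
docker exec -it pynab bash -c 'tail -n 50 /var/log/supervisor/*.log 2>/dev/null; tail -n 50 /data/*.log 2>/dev/null'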

