* Use nginx for static files
* Add netbox to docker-compose
* Lint source code
* Update nginx.conf to actually work
* Build a base `branch` tag for the latest of that branch
* Updated README
* Start of Vapor Netbox Module
* Add api/vapor route
* Ignore virtualenv
* Query devices assigned to users
* Add vapor/interfaces route
* Adds docker-compose file to manage postgres/redis
- Initial test suite for the vapor API module. (Django tests are kind
  of hard and slow.)
* Init pipeline
- Adds a tox harness to run the test suite
- Runs the tests in tox
- Clones the example config to config.py
- Adds the kubernetes agent + dependent services to the podspec so
  the tests can complete
- Some really hacky sed configuration applied on the fly
* Init Docker build
- Adds the dockerfile assets from vapor-ware/netbox-docker
- Slight changes to keep the root directory clean (nesting dirs in
docker path)
- Adds a rudimentary job label to build on the micro-k8s builder
- Adds docker build/publish stages to the pipeline for branch builds
- Dockerignores the project dir, since it's fetching packages from GHAPI
* Cleanups
* More unittests
It was hard to test with the old syntax: it cloned the "master"
branch, so trying to test a development change was difficult.
I believe I've fixed it so that the "master" and "develop" branches
can use the same Dockerfile options. You override which branch it
pulls by setting a build-arg, either via docker-compose or in the
docker build options.
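As a sketch of how that override can look (the BRANCH argument name,
base image, and repo URL here are assumptions, not necessarily what our
Dockerfile uses):

    # Dockerfile -- BRANCH defaults to master, overridable at build time
    FROM python:3
    ARG BRANCH=master
    RUN git clone --depth 1 --branch "${BRANCH}" \
        https://github.com/netbox-community/netbox.git /opt/netbox

    # Override on the command line:
    #   docker build --build-arg BRANCH=develop .
    # ...or in docker-compose.yml:
    #   build:
    #     context: .
    #     args:
    #       BRANCH: develop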
To download a new version with docker, I've been running:

    git pull
    docker-compose build --no-cache

This is slow, but --no-cache is needed so that "git clone" pulls the
latest copy.
Most of the slowness comes from pulling down apt packages each time a
rebuild is needed. If we move that step into a pre-built docker image,
then only the local changes need to be rebuilt.
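A sketch of that idea: bake the apt work into a separate base image
that's rebuilt rarely, so the main image can be rebuilt with --no-cache
without redoing it. The image name and package list below are
illustrative, not the real ones:

    # netbox-base/Dockerfile -- built rarely; the apt work lives here
    FROM python:3
    RUN apt-get update \
        && apt-get install -y --no-install-recommends \
            libpq-dev libxml2-dev libxslt1-dev libffi-dev graphviz \
        && rm -rf /var/lib/apt/lists/*

    # main Dockerfile -- rebuilt with --no-cache, but starts from the
    # pre-built image, so the apt layer is already baked in
    FROM netbox-base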
Further refinements are possible. If the python dependencies brought
in from requirements.txt were also moved into an image, then nothing
would change between updates as long as the dependency versions hadn't
changed. This is probably more trouble than it's worth, unless you're
recreating netbox containers 10-20 times a day.
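For completeness, a sketch of that refinement (same caveat: the image
names and paths are made up): a second pre-built image layers the pip
install on top of the apt one, and is rebuilt only when
requirements.txt changes.

    # netbox-deps/Dockerfile -- rebuilt only when requirements.txt changes
    FROM netbox-base
    COPY requirements.txt /tmp/requirements.txt
    RUN pip install --no-cache-dir -r /tmp/requirements.txt

    # main Dockerfile -- only the git clone and local changes happen here
    FROM netbox-deps

At that point a --no-cache rebuild costs little more than the clone
itself.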