Merge branch 'develop' into develop-2.8

Jeremy Stretch 2020-03-03 13:20:00 -05:00
commit c85bcbcf31
77 changed files with 1331 additions and 565 deletions

.gitignore vendored
View File

@ -1,4 +1,5 @@
*.pyc
*.swp
/netbox/netbox/configuration.py
/netbox/netbox/ldap_config.py
/netbox/reports/*
@ -6,15 +7,14 @@
/netbox/scripts/*
!/netbox/scripts/__init__.py
/netbox/static
.idea
/venv/
/*.sh
!upgrade.sh
fabfile.py
*.swp
gunicorn_config.py
gunicorn.py
netbox.log
netbox.pid
.DS_Store
.vscode
.idea
.coverage
.vscode

View File

@ -26,8 +26,12 @@ or join us in the #netbox Slack channel on [NetworkToCode](https://networktocode
![Screenshot of main page](docs/media/screenshot1.png "Main page")
---
![Screenshot of rack elevation](docs/media/screenshot2.png "Rack elevation")
---
![Screenshot of prefix hierarchy](docs/media/screenshot3.png "Prefix hierarchy")
# Installation

View File

@ -58,6 +58,10 @@ djangorestframework
# https://github.com/axnsan12/drf-yasg
drf-yasg[validation]
# WSGI HTTP server
# https://gunicorn.org/
gunicorn
# Platform-agnostic template rendering engine
# https://github.com/pallets/jinja
Jinja2
@ -98,3 +102,7 @@ redis
# SVG image rendering (used for rack elevations)
# https://github.com/mozman/svgwrite
svgwrite
# Python package management tool
# https://pythonwheels.com/
wheel

View File

@ -7,12 +7,11 @@ Wants=network-online.target
[Service]
Type=simple
User=www-data
Group=www-data
User=netbox
Group=netbox
WorkingDirectory=/opt/netbox
ExecStart=/usr/bin/python3 /opt/netbox/netbox/manage.py rqworker
ExecStart=/opt/netbox/venv/bin/python3 /opt/netbox/netbox/manage.py rqworker
Restart=on-failure
RestartSec=30

View File

@ -7,12 +7,12 @@ Wants=network-online.target
[Service]
Type=simple
User=www-data
Group=www-data
User=netbox
Group=netbox
PIDFile=/var/tmp/netbox.pid
WorkingDirectory=/opt/netbox
ExecStart=/usr/local/bin/gunicorn --pid /var/tmp/netbox.pid --pythonpath /opt/netbox/netbox --config /opt/netbox/gunicorn.py netbox.wsgi
ExecStart=/opt/netbox/venv/bin/gunicorn --pid /var/tmp/netbox.pid --pythonpath /opt/netbox/netbox --config /opt/netbox/gunicorn.py netbox.wsgi
Restart=on-failure
RestartSec=30

View File

@ -27,11 +27,17 @@ class MyScript(Script):
var2 = IntegerVar(...)
var3 = ObjectVar(...)
def run(self, data):
def run(self, data, commit):
...
```
The `run()` method is passed a single argument: a dictionary containing all of the variable data passed via the web form. Your script can reference this data during execution.
The `run()` method should accept two arguments:
* `data` - A dictionary containing all of the variable data passed via the web form.
* `commit` - A boolean indicating whether database changes will be committed.
!!! note
The `commit` argument was introduced in NetBox v2.7.8. Backward compatibility is maintained for scripts which accept only the `data` argument; however, moving forward, scripts should accept both arguments.
Defining variables is optional: You may create a script with only a `run()` method if no user input is needed.
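For instance, a minimal script with no variables might look like this (a sketch; the class name is illustrative):
```
from extras.scripts import Script


class HousekeepingScript(Script):
    """
    A script with no variables; `data` will be an empty dictionary.
    """

    def run(self, data, commit):
        # `commit` reflects whether the user chose to commit database changes
        self.log_info("Running with commit={}".format(commit))
```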
@ -196,7 +202,7 @@ These variables are presented as a web form to be completed by the user. Once su
```
from django.utils.text import slugify
from dcim.constants import *
from dcim.choices import DeviceStatusChoices, SiteStatusChoices
from dcim.models import Device, DeviceRole, DeviceType, Site
from extras.scripts import *
@ -222,13 +228,13 @@ class NewBranchScript(Script):
)
)
def run(self, data):
def run(self, data, commit):
# Create the new site
site = Site(
name=data['site_name'],
slug=slugify(data['site_name']),
status=SITE_STATUS_PLANNED
status=SiteStatusChoices.STATUS_PLANNED
)
site.save()
self.log_success("Created new site: {}".format(site))
@ -240,7 +246,7 @@ class NewBranchScript(Script):
device_type=data['switch_model'],
name='{}-switch{}'.format(site.slug, i),
site=site,
status=DEVICE_STATUS_PLANNED,
status=DeviceStatusChoices.STATUS_PLANNED,
device_role=switch_role
)
switch.save()

View File

@ -3,7 +3,7 @@
NetBox supports integration with the [NAPALM automation](https://napalm-automation.net/) library. NAPALM allows NetBox to fetch live data from devices and return it to a requester via its REST API.
!!! info
To enable the integration, the NAPALM library must be installed. See [installation steps](../../installation/2-netbox/#napalm-automation-optional) for more information.
To enable the integration, the NAPALM library must be installed. See [installation steps](../../installation/3-netbox/#napalm-automation-optional) for more information.
```
GET /api/dcim/devices/1/napalm/?method=get_environment

View File

@ -1,61 +1,73 @@
# Webhooks
A webhook defines an HTTP request that is sent to an external application when certain types of objects are created, updated, and/or deleted in NetBox. When a webhook is triggered, a POST request is sent to its configured URL. This request will include a full representation of the object being modified for consumption by the receiver. Webhooks are configured via the admin UI under Extras > Webhooks.
A webhook is a mechanism for conveying to some external system a change that took place in NetBox. For example, you may want to notify a monitoring system whenever a device status is changed in NetBox. This can be done by creating a webhook for the device model in NetBox. When NetBox detects a change to a device, an HTTP request containing the details of the change and who made it will be sent to the specified receiver. Webhooks are configured in the admin UI under Extras > Webhooks.
An optional secret key can be configured for each webhook. This will append an `X-Hook-Signature` header to the request, consisting of an HMAC (SHA-512) hex digest of the request body using the secret as the key. This digest can be used by the receiver to authenticate the request's content.
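A receiver can verify the signature by recomputing the same digest over the raw request body. A minimal sketch in Python (the function name is illustrative):
```
import hashlib
import hmac

def signature_is_valid(request_body, secret, header_signature):
    # Recompute the HMAC-SHA512 hex digest of the raw request body using the shared secret
    digest = hmac.new(secret.encode('utf8'), request_body, hashlib.sha512).hexdigest()
    # Compare against the X-Hook-Signature header value in constant time
    return hmac.compare_digest(digest, header_signature)
```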
## Configuration
## Requests
* **Name** - A unique name for the webhook. The name is not included with outbound messages.
* **Object type(s)** - The type or types of NetBox object that will trigger the webhook.
* **Enabled** - If unchecked, the webhook will be inactive.
* **Events** - A webhook may trigger on any combination of create, update, and delete events. At least one event type must be selected.
* **HTTP method** - The type of HTTP request to send. Options include GET, POST, PUT, PATCH, and DELETE.
* **URL** - The fully-qualified URL of the request to be sent. This may specify a destination port number if needed.
* **HTTP content type** - The value of the request's `Content-Type` header. (Defaults to `application/json`)
* **Additional headers** - Any additional headers to include with the request (optional). Add one header per line in the format `Name: Value`. Jinja2 templating is supported for this field (see below).
* **Body template** - The content of the request being sent (optional). Jinja2 templating is supported for this field (see below). If blank, NetBox will populate the request body with a raw dump of the webhook context. (If the HTTP content type is set to `application/json`, this will be formatted as a JSON object.)
* **Secret** - A secret string used to prove authenticity of the request (optional). This will append an `X-Hook-Signature` header to the request, consisting of an HMAC (SHA-512) hex digest of the request body using the secret as the key.
* **SSL verification** - Uncheck this option to disable validation of the receiver's SSL certificate. (Disable with caution!)
* **CA file path** - The file path to a particular certificate authority (CA) file to use when validating the receiver's SSL certificate (optional).
The webhook POST request is structured as follows (assuming `application/json` as the Content-Type):
## Jinja2 Template Support
[Jinja2 templating](https://jinja.palletsprojects.com/) is supported for the `additional_headers` and `body_template` fields. This enables the user to convey change data in the request headers as well as to craft a customized request body. Request content can be crafted to interact directly with external systems by ensuring the outgoing message is in a format the receiver expects and understands.
For example, you might create a NetBox webhook to [trigger a Slack message](https://api.slack.com/messaging/webhooks) any time an IP address is created. You can accomplish this using the following configuration:
* Object type: IPAM > IP address
* HTTP method: POST
* URL: <Slack incoming webhook URL>
* HTTP content type: `application/json`
* Body template: `{"text": "IP address {{ data['address'] }} was created by {{ username }}!"}`
### Available Context
The following data is available as context for Jinja2 templates:
* `event` - The type of event which triggered the webhook: created, updated, or deleted.
* `model` - The NetBox model which triggered the change.
* `timestamp` - The time at which the event occurred (in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format).
* `username` - The name of the user account associated with the change.
* `request_id` - The unique request ID. This may be used to correlate multiple changes associated with a single request.
* `data` - A serialized representation of the object _after_ the change was made. This is typically equivalent to the model's representation in NetBox's REST API.
### Default Request Body
If no body template is specified, the request body will be populated with a JSON object containing the context data. For example, a newly created site might appear as follows:
```no-highlight
{
"event": "created",
"timestamp": "2019-10-12 12:51:29.746944",
"username": "admin",
"timestamp": "2020-02-25 15:10:26.010582+00:00",
"model": "site",
"request_id": "43d8e212-94c7-4f67-b544-0dcde4fc0f43",
"username": "jstretch",
"request_id": "fdbca812-3142-4783-b364-2e2bd5c16c6a",
"data": {
"id": 19,
"name": "Site 1",
"slug": "site-1",
"status":
"value": "active",
"label": "Active",
"id": 1
},
"region": null,
...
}
}
```
`data` is the serialized representation of the model instance(s) from the event. The same serializers from the NetBox API are used. So an example of the payload for a Site delete event would be:
## Webhook Processing
```no-highlight
{
"event": "deleted",
"timestamp": "2019-10-12 12:55:44.030750",
"username": "johnsmith",
"model": "site",
"request_id": "e9bb83b2-ebe4-4346-b13f-07144b1a00b4",
"data": {
"asn": None,
"comments": "",
"contact_email": "",
"contact_name": "",
"contact_phone": "",
"count_circuits": 0,
"count_devices": 0,
"count_prefixes": 0,
"count_racks": 0,
"count_vlans": 0,
"custom_fields": {},
"facility": "",
"id": 54,
"name": "test",
"physical_address": "",
"region": None,
"shipping_address": "",
"slug": "test",
"tenant": None
}
}
```
When a change is detected, any resulting webhooks are placed into a Redis queue for processing. This allows the user's request to complete without needing to wait for the outgoing webhook(s) to be processed. The webhooks are then extracted from the queue by the `rqworker` process and HTTP requests are sent to their respective destinations. The current webhook queue and any failed webhooks can be inspected in the admin UI under Django RQ > Queues.
A request is considered successful if the response status code is any one of a list of "good" statuses defined in the [requests library](https://github.com/requests/requests/blob/205755834d34a8a6ecf2b0b5b2e9c3e6a7f4e4b6/requests/models.py#L688), otherwise the request is marked as having failed. The user may manually retry a failed request.
## Backend Status
Django-rq includes a status page in the admin site which can be used to view the result of processed webhooks and manually retry any failed webhooks. Access it from http://netbox.local/admin/webhook-backend-status/.
A request is considered successful if the response has a 2XX status code; otherwise, the request is marked as having failed. Failed requests may be retried manually via the admin UI.

View File

@ -1,14 +1,13 @@
NetBox requires a PostgreSQL database to store data. This can be hosted locally or on a remote server. (Please note that MySQL is not supported, as NetBox leverages PostgreSQL's built-in [network address types](https://www.postgresql.org/docs/current/static/datatype-net-types.html).)
!!! note
The installation instructions provided here have been tested to work on Ubuntu 18.04 and CentOS 7.5. The particular commands needed to install dependencies on other distributions may vary significantly. Unfortunately, this is outside the control of the NetBox maintainers. Please consult your distribution's documentation for assistance with any errors.
This section entails the installation and configuration of a local PostgreSQL database. If you already have a PostgreSQL database service in place, skip to [the next section](2-redis.md).
!!! warning
NetBox requires PostgreSQL 9.4 or higher.
NetBox requires PostgreSQL 9.4 or higher. Please note that MySQL and other relational databases are **not** supported.
# Installation
The installation instructions provided here have been tested to work on Ubuntu 18.04 and CentOS 7.5. The particular commands needed to install dependencies on other distributions may vary significantly. Unfortunately, this is outside the control of the NetBox maintainers. Please consult your distribution's documentation for assistance with any errors.
**Ubuntu**
## Installation
#### Ubuntu
If a recent enough version of PostgreSQL is not available through your distribution's package manager, you'll need to install it from an official [PostgreSQL repository](https://wiki.postgresql.org/wiki/Apt).
@ -17,13 +16,13 @@ If a recent enough version of PostgreSQL is not available through your distribut
# apt-get install -y postgresql libpq-dev
```
**CentOS**
#### CentOS
CentOS 7.5 does not ship with a recent enough version of PostgreSQL, so it will need to be installed from an external repository. The instructions below show the installation of PostgreSQL 9.6.
```no-highlight
# yum install https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-centos96-9.6-3.noarch.rpm
# yum install postgresql96 postgresql96-server postgresql96-devel
# yum install -y https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-centos96-9.6-3.noarch.rpm
# yum install -y postgresql96 postgresql96-server postgresql96-devel
# /usr/pgsql-9.6/bin/postgresql96-setup initdb
```
@ -41,7 +40,7 @@ Then, start the service and enable it to run at boot:
# systemctl enable postgresql-9.6
```
# Database Creation
## Database Creation
At a minimum, we need to create a database for NetBox and assign it a username and password for authentication. This is done with the following commands.
@ -62,6 +61,8 @@ GRANT
postgres=# \q
```
## Verify Service Status
You can verify that authentication works by issuing the following command and providing the configured password. (Replace `localhost` with your database server if using a remote database.)
```no-highlight

View File

@ -0,0 +1,27 @@
[Redis](https://redis.io/) is an in-memory key-value store which NetBox employs for caching and queuing. This section entails the installation and configuration of a local Redis instance. If you already have a Redis service in place, skip to [the next section](3-netbox.md).
#### Ubuntu
```no-highlight
# apt-get install -y redis-server
```
#### CentOS
```no-highlight
# yum install -y epel-release
# yum install -y redis
# systemctl start redis
# systemctl enable redis
```
You may wish to modify the Redis configuration at `/etc/redis.conf` or `/etc/redis/redis.conf`; however, in most cases the default configuration is sufficient.
## Verify Service Status
Use the `redis-cli` utility to ensure the Redis service is functional:
```no-highlight
$ redis-cli ping
PONG
```

View File

@ -1,25 +1,25 @@
# Installation
This section of the documentation discusses installing and configuring the NetBox application. Begin by installing all system packages required by NetBox and its dependencies:
**Ubuntu**
## Install System Packages
#### Ubuntu
```no-highlight
# apt-get install -y python3 python3-pip python3-dev build-essential libxml2-dev libxslt1-dev libffi-dev libpq-dev libssl-dev redis-server zlib1g-dev
# apt-get install -y python3 python3-pip python3-venv python3-dev build-essential libxml2-dev libxslt1-dev libffi-dev libpq-dev libssl-dev zlib1g-dev
```
**CentOS**
#### CentOS
```no-highlight
# yum install -y epel-release
# yum install -y gcc python36 python36-devel python36-setuptools libxml2-devel libxslt-devel libffi-devel openssl-devel redhat-rpm-config redis
# yum install -y gcc python36 python36-devel python36-setuptools libxml2-devel libxslt-devel libffi-devel openssl-devel redhat-rpm-config
# easy_install-3.6 pip
# ln -s /usr/bin/python3.6 /usr/bin/python3
```
## Download NetBox
You may opt to install NetBox either from a numbered release or by cloning the master branch of its repository on GitHub.
## Option A: Download a Release
### Option A: Download a Release
Download the [latest stable release](https://github.com/netbox-community/netbox/releases) from GitHub as a tarball or ZIP archive and extract it to your desired path. In this example, we'll use `/opt/netbox`.
@ -31,7 +31,7 @@ Download the [latest stable release](https://github.com/netbox-community/netbox/
# cd /opt/netbox/
```
## Option B: Clone the Git Repository
### Option B: Clone the Git Repository
Create the base directory for the NetBox installation. For this guide, we'll use `/opt/netbox`.
@ -41,13 +41,13 @@ Create the base directory for the NetBox installation. For this guide, we'll use
If `git` is not already installed, install it:
**Ubuntu**
#### Ubuntu
```no-highlight
# apt-get install -y git
```
**CentOS**
#### CentOS
```no-highlight
# yum install -y git
@ -66,45 +66,56 @@ Resolving deltas: 100% (1495/1495), done.
Checking connectivity... done.
```
!!! warning
Ensure that the media directory (`/opt/netbox/netbox/media/` in this example) and all its subdirectories are writable by the user account under which NetBox runs. If the NetBox process does not have permission to write to this directory, attempts to upload files (e.g. image attachments) will fail. (The appropriate user account will vary by platform.)
## Create the NetBox User
`# chown -R netbox:netbox /opt/netbox/netbox/media/`
# Install Python Packages
Install the required Python packages using pip. (If you encounter any compilation errors during this step, ensure that you've installed all of the system dependencies listed above.)
```no-highlight
# pip3 install -r requirements.txt
```
Create a system user account named `netbox`. We'll configure the WSGI and HTTP services to run under this account. We'll also assign this user ownership of the media directory. This ensures that NetBox will be able to save local files.
!!! note
If you encounter errors while installing the required packages, check that you're running a recent version of pip (v9.0.1 or higher) with the command `pip3 -V`.
CentOS users may need to create the `netbox` group first.
## NAPALM Automation (Optional)
NetBox supports integration with the [NAPALM automation](https://napalm-automation.net/) library. NAPALM allows NetBox to fetch live data from devices and return it to a requester via its REST API. Installation of NAPALM is optional. To enable it, install the `napalm` package using pip or pip3:
```no-highlight
# pip3 install napalm
```
# adduser --system --group netbox
# chown --recursive netbox /opt/netbox/netbox/media/
```
## Remote File Storage (Optional)
## Set Up Python Environment
We'll use a Python [virtual environment](https://docs.python.org/3.6/tutorial/venv.html) to ensure NetBox's required packages don't conflict with anything in the base system. This will create a directory named `venv` in our NetBox root.
```no-highlight
# python3 -m venv /opt/netbox/venv
```
Next, activate the virtual environment and install the required Python packages. You should see your console prompt change to indicate the active environment. (Activating the virtual environment updates your command shell to use the local copy of Python that we just installed for NetBox instead of the system's Python interpreter.)
```no-highlight
# source venv/bin/activate
(venv) # pip3 install -r requirements.txt
```
#### NAPALM Automation (Optional)
NetBox supports integration with the [NAPALM automation](https://napalm-automation.net/) library. NAPALM allows NetBox to fetch live data from devices and return it to a requester via its REST API. Installation of NAPALM is optional. To enable it, install the `napalm` package:
```no-highlight
(venv) # pip3 install napalm
```
#### Remote File Storage (Optional)
By default, NetBox will use the local filesystem to store uploaded files. To use a remote filesystem, install the [`django-storages`](https://django-storages.readthedocs.io/en/stable/) library and configure your [desired backend](../../configuration/optional-settings/#storage_backend) in `configuration.py`.
```no-highlight
# pip3 install django-storages
(venv) # pip3 install django-storages
```
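The backend can then be declared in `configuration.py`. A sketch using the S3 backend (credential values are placeholders):
```no-highlight
STORAGE_BACKEND = 'storages.backends.s3boto3.S3Boto3Storage'
STORAGE_CONFIG = {
    'AWS_ACCESS_KEY_ID': 'key-id-goes-here',
    'AWS_SECRET_ACCESS_KEY': 'secret-goes-here',
    'AWS_STORAGE_BUCKET_NAME': 'netbox-media',
}
```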
# Configuration
## Configuration
Move into the NetBox configuration directory and make a copy of `configuration.example.py` named `configuration.py`.
```no-highlight
# cd netbox/netbox/
# cp configuration.example.py configuration.py
(venv) # cd netbox/netbox/
(venv) # cp configuration.example.py configuration.py
```
Open `configuration.py` with your preferred editor and set the following variables:
@ -114,7 +125,7 @@ Open `configuration.py` with your preferred editor and set the following variabl
* `REDIS`
* `SECRET_KEY`
## ALLOWED_HOSTS
### ALLOWED_HOSTS
This is a list of the valid hostnames by which this server can be reached. You must specify at least one name or IP address.
@ -124,7 +135,7 @@ Example:
ALLOWED_HOSTS = ['netbox.example.com', '192.0.2.123']
```
## DATABASE
### DATABASE
This parameter holds the database configuration details. You must define the username and password used when you configured PostgreSQL. If the service is running on a remote host, replace `localhost` with its address. See the [configuration documentation](../../configuration/required-settings/#database) for more detail on individual parameters.
@ -141,7 +152,7 @@ DATABASE = {
}
```
## REDIS
### REDIS
Redis is an in-memory key-value store required as part of the NetBox installation. It is used for features such as webhooks and caching. Redis typically requires minimal configuration; the values below should suffice for most installations. See the [configuration documentation](../../configuration/required-settings/#redis) for more detail on individual parameters.
@ -166,7 +177,7 @@ REDIS = {
}
```
## SECRET_KEY
### SECRET_KEY
Generate a random secret key of at least 50 alphanumeric characters. This key must be unique to this installation and must not be shared outside the local system.
@ -175,13 +186,13 @@ You may use the script located at `netbox/generate_secret_key.py` to generate a
!!! note
In the case of a highly available installation with multiple web servers, `SECRET_KEY` must be identical among all servers in order to maintain a persistent user session state.
# Run Database Migrations
## Run Database Migrations
Before NetBox can run, we need to install the database schema. This is done by running `python3 manage.py migrate` from the `netbox` directory (`/opt/netbox/netbox/` in our example):
```no-highlight
# cd /opt/netbox/netbox/
# python3 manage.py migrate
(venv) # cd /opt/netbox/netbox/
(venv) # python3 manage.py migrate
Operations to perform:
Apply all migrations: dcim, sessions, admin, ipam, utilities, auth, circuits, contenttypes, extras, secrets, users
Running migrations:
@ -194,12 +205,12 @@ Running migrations:
If this step results in a PostgreSQL authentication error, ensure that the username and password created in the database match what has been specified in `configuration.py`.
# Create a Super User
## Create a Super User
NetBox does not come with any predefined user accounts. You'll need to create a super user to be able to log into NetBox:
```no-highlight
# python3 manage.py createsuperuser
(venv) # python3 manage.py createsuperuser
Username: admin
Email address: admin@example.com
Password:
@ -207,20 +218,20 @@ Password (again):
Superuser created successfully.
```
# Collect Static Files
## Collect Static Files
```no-highlight
# python3 manage.py collectstatic --no-input
(venv) # python3 manage.py collectstatic --no-input
959 static files copied to '/opt/netbox/netbox/static'.
```
# Test the Application
## Test the Application
At this point, NetBox should be able to run. We can verify this by starting a development instance:
```no-highlight
# python3 manage.py runserver 0.0.0.0:8000 --insecure
(venv) # python3 manage.py runserver 0.0.0.0:8000 --insecure
Performing system checks...
System check identified no issues (0 silenced).

View File

@ -3,9 +3,9 @@ We'll set up a simple WSGI front end using [gunicorn](http://gunicorn.org/) for
!!! info
For the sake of brevity, only Ubuntu 18.04 instructions are provided here, but this sort of web server and WSGI configuration is not unique to NetBox. Please consult your distribution's documentation for assistance if needed.
# Web Server Installation
## HTTP Daemon Installation
## Option A: nginx
### Option A: nginx
The following will serve as a minimal nginx configuration. Be sure to modify your server name and installation path appropriately.
@ -52,7 +52,7 @@ Restart the nginx service to use the new configuration.
To enable SSL, consider this guide on [securing nginx with Let's Encrypt](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04).
## Option B: Apache
### Option B: Apache
```no-highlight
# apt-get install -y apache2 libapache2-mod-wsgi-py3
@ -99,15 +99,12 @@ Save the contents of the above example in `/etc/apache2/sites-available/netbox.c
To enable SSL, consider this guide on [securing Apache with Let's Encrypt](https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-16-04).
# gunicorn Installation
!!! note
Certain components of NetBox (such as the display of rack elevation diagrams) rely on the use of embedded objects. Ensure that your HTTP server configuration does not override the `X-Frame-Options` response header set by NetBox.
Install gunicorn:
## gunicorn Configuration
```no-highlight
# pip3 install gunicorn
```
Copy `/opt/netbox/contrib/gunicorn.py` to `/opt/netbox/gunicorn.py`. We make a copy of this file to ensure that any changes to it do not get overwritten by a future upgrade.
Copy `/opt/netbox/contrib/gunicorn.py` to `/opt/netbox/gunicorn.py`. (We make a copy of this file to ensure that any changes to it do not get overwritten by a future upgrade.)
```no-highlight
# cd /opt/netbox
@ -116,7 +113,7 @@ Copy `/opt/netbox/contrib/gunicorn.py` to `/opt/netbox/gunicorn.py`. We make a c
You may wish to edit this file to change the bound IP address or port number, or to make performance-related adjustments.
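For reference, a minimal `gunicorn.py` might contain values along these lines (illustrative, not required settings):
```no-highlight
# Bind to localhost on port 8001 (the port the HTTP daemon proxies to)
bind = '127.0.0.1:8001'
# Worker processes and threads; tune to suit your hardware
workers = 5
threads = 3
timeout = 120
```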
# systemd configuration
## systemd Configuration
We'll use systemd to control the daemonization of NetBox services. First, copy `contrib/netbox.service` and `contrib/netbox-rq.service` to the `/etc/systemd/system/` directory:
@ -124,17 +121,12 @@ We'll use systemd to control the daemonization of NetBox services. First, copy `
# cp contrib/*.service /etc/systemd/system/
```
!!! note
These service files assume that gunicorn is installed at `/usr/local/bin/gunicorn`. If the output of `which gunicorn` indicates a different path, you'll need to correct the `ExecStart` path in both files.
Then, start the `netbox` and `netbox-rq` services and enable them to initiate at boot time:
```no-highlight
# systemctl daemon-reload
# systemctl start netbox.service
# systemctl start netbox-rq.service
# systemctl enable netbox.service
# systemctl enable netbox-rq.service
# systemctl start netbox netbox-rq
# systemctl enable netbox netbox-rq
```
You can use the command `systemctl status netbox` to verify that the WSGI service is running:
@ -154,7 +146,20 @@ You can use the command `systemctl status netbox` to verify that the WSGI servic
...
```
At this point, you should be able to connect to the HTTP service at the server name or IP address you provided. If you are unable to connect, check that the nginx service is running and properly configured. If you receive a 502 (bad gateway) error, this indicates that gunicorn is misconfigured or not running.
At this point, you should be able to connect to the HTTP service at the server name or IP address you provided.
!!! info
Please keep in mind that the configurations provided here are bare minimums required to get NetBox up and running. You may want to make adjustments to better suit your production environment.
## Troubleshooting
If you are unable to connect to the HTTP server, check that:
* Nginx/Apache is running and configured to listen on the correct port.
* Access is not being blocked by a firewall. (Try connecting locally from the server itself.)
If you are able to connect but receive a 502 (bad gateway) error, check the following:
* The NetBox system process (gunicorn) is running: `systemctl status netbox`
* nginx/Apache is configured to connect to the port on which gunicorn is listening (default is 8001).
* SELinux is not preventing the reverse proxy connection. You may need to allow HTTP network connections with the command `setsebool -P httpd_can_network_connect 1`

View File

@ -1,8 +1,8 @@
This guide explains how to implement LDAP authentication using an external server. User authentication will fall back to built-in Django users in the event of a failure.
# Requirements
## Install Requirements
## Install openldap-devel
#### Install openldap-devel
On Ubuntu:
@ -16,17 +16,17 @@ On CentOS:
sudo yum install -y openldap-devel
```
## Install django-auth-ldap
#### Install django-auth-ldap
```no-highlight
pip3 install django-auth-ldap
```
# Configuration
## Configuration
Create a file in the same directory as `configuration.py` (typically `netbox/netbox/`) named `ldap_config.py`. Define all of the parameters required below in `ldap_config.py`. Complete documentation of all `django-auth-ldap` configuration options is included in the project's [official documentation](http://django-auth-ldap.readthedocs.io/).
## General Server Configuration
### General Server Configuration
!!! info
When using Windows Server 2012 you may need to specify a port on `AUTH_LDAP_SERVER_URI`. Use `3269` for secure, or `3268` for non-secure.
@ -54,7 +54,7 @@ LDAP_IGNORE_CERT_ERRORS = True
STARTTLS can be configured by setting `AUTH_LDAP_START_TLS = True` and using the `ldap://` URI scheme.
## User Authentication
### User Authentication
!!! info
When using Windows Server 2012, `AUTH_LDAP_USER_DN_TEMPLATE` should be set to None.
@ -79,7 +79,7 @@ AUTH_LDAP_USER_ATTR_MAP = {
}
```
# User Groups for Permissions
## User Groups for Permissions
!!! info
When using Microsoft Active Directory, support for nested groups can be activated by using `NestedGroupOfNamesType()` instead of `GroupOfNamesType()` for `AUTH_LDAP_GROUP_TYPE`. You will also need to modify the import line to use `NestedGroupOfNamesType` instead of `GroupOfNamesType`.
@ -121,7 +121,7 @@ AUTH_LDAP_CACHE_TIMEOUT = 3600
!!! warning
Authentication will fail if the groups (the distinguished names) do not exist in the LDAP directory.
# Troubleshooting LDAP
## Troubleshooting LDAP
`supervisorctl restart netbox` restarts the NetBox service and loads any changes made to `ldap_config.py`. If there are syntax errors present, the NetBox process will not spawn an instance, and errors should be logged to `/var/log/supervisor/`.

View File

@ -3,14 +3,13 @@
The following sections detail how to set up a new instance of NetBox:
1. [PostgreSQL database](1-postgresql.md)
2. [NetBox components](2-netbox.md)
3. [HTTP daemon](3-http-daemon.md)
4. [LDAP authentication](4-ldap.md) (optional)
1. [Redis](2-redis.md)
3. [NetBox components](3-netbox.md)
4. [HTTP daemon](4-http-daemon.md)
5. [LDAP authentication](5-ldap.md) (optional)
# Upgrading
If you are upgrading from an existing installation, please consult the [upgrading guide](upgrading.md).
NetBox v2.5 and later requires Python 3.5 or higher. Please see the instructions for [migrating to Python 3](migrating-to-python3.md) if you are still using Python 2.
NetBox v2.5.9 and later moved to using systemd instead of supervisord. Please see the instructions for [migrating to systemd](migrating-to-systemd.md) if you are still using supervisord.

View File

@ -1,38 +0,0 @@
# Migration
!!! warning
As of version 2.5, NetBox no longer supports Python 2. Python 3 is required to run any 2.5 release or later.
## Ubuntu
Remove the Python2 version of gunicorn:
```no-highlight
# pip uninstall -y gunicorn
```
Install Python3 and pip3, Python's package management tool:
```no-highlight
# apt-get update
# apt-get install -y python3 python3-dev python3-setuptools
# easy_install3 pip
```
Install the Python3 packages required by NetBox:
```no-highlight
# pip3 install -r requirements.txt
```
Replace gunicorn with the Python3 version:
```no-highlight
# pip3 install gunicorn
```
If using LDAP authentication, install the `django-auth-ldap` package:
```no-highlight
# pip3 install django-auth-ldap
```

View File

@ -1,16 +1,17 @@
# Migration
Migration is not required, as supervisord will still continue to function.
This document contains instructions for migrating from a legacy NetBox deployment using [supervisor](http://supervisord.org/) to a systemd-based approach.
## Ubuntu
### Remove supervisord:
### Uninstall supervisord:
```no-highlight
# apt-get remove -y supervisord
```
### systemd configuration:
### Configure systemd:
!!! note
These instructions assume the presence of a Python virtual environment at `/opt/netbox/venv`. If you have not created this environment, please refer to the [installation instructions](3-netbox.md#set-up-python-environment) for direction.
We'll use systemd to control the daemonization of NetBox services. First, copy `contrib/netbox.service` and `contrib/netbox-rq.service` to the `/etc/systemd/system/` directory:
@ -19,19 +20,14 @@ We'll use systemd to control the daemonization of NetBox services. First, copy `
```
!!! note
These service files assume that gunicorn is installed at `/usr/local/bin/gunicorn`. If the output of `which gunicorn` indicates a different path, you'll need to correct the `ExecStart` path in both files.
!!! note
You may need to modify the user that the systemd service runs as. Please verify the user for httpd on your specific release and edit both files to match your httpd service under user and group. The username could be "nobody", "nginx", "apache", "www-data" or any number of other usernames.
You may need to modify the user that the systemd service runs as. Please verify the user for httpd on your specific release and edit both files to match your httpd service under user and group. The username could be "nobody", "nginx", "apache", "www-data", or something else.
Then, start the `netbox` and `netbox-rq` services and enable them to initiate at boot time:
```no-highlight
# systemctl daemon-reload
# systemctl start netbox.service
# systemctl start netbox-rq.service
# systemctl enable netbox.service
# systemctl enable netbox-rq.service
# systemctl start netbox netbox-rq
# systemctl enable netbox netbox-rq
```
You can use the command `systemctl status netbox` to verify that the WSGI service is running:
@ -51,7 +47,7 @@ You can use the command `systemctl status netbox` to verify that the WSGI servic
...
```
At this point, you should be able to connect to the HTTP service at the server name or IP address you provided. If you are unable to connect, check that the nginx service is running and properly configured. If you receive a 502 (bad gateway) error, this indicates that gunicorn is misconfigured or not running.
At this point, you should be able to connect to the HTTP service at the server name or IP address you provided. If you are unable to connect, check that the nginx service is running and properly configured. If you receive a 502 (bad gateway) error, this indicates that gunicorn is misconfigured or not running. Issue the command `journalctl -xe` to see why the services were unable to start.
!!! info
Please keep in mind that the configurations provided here are bare minimums required to get NetBox up and running. You may want to make adjustments to better suit your production environment.

View File

@ -1,12 +1,12 @@
# Review the Release Notes
## Review the Release Notes
Prior to upgrading your NetBox instance, be sure to carefully review all [release notes](../../release-notes/) that have been published since your current version was released. Although the upgrade process typically does not involve additional work, certain releases may introduce breaking or backward-incompatible changes. These are called out in the release notes under the version in which the change went into effect.
# Install the Latest Code
## Install the Latest Code
As with the initial installation, you can upgrade NetBox by either downloading the latest release package or by cloning the `master` branch of the git repository.
## Option A: Download a Release
### Option A: Download a Release
Download the [latest stable release](https://github.com/netbox-community/netbox/releases) from GitHub as a tarball or ZIP archive. Extract it to your desired path. In this example, we'll use `/opt/netbox`.
@ -34,7 +34,7 @@ Be sure to replicate your uploaded media as well. (The exact action necessary wi
Also make sure to copy over any reports that you've made. Note that if you made them in a separate directory (`/opt/netbox-reports` for example), then you will not need to copy them - the config file that you copied earlier will point to the correct location.
```no-highlight
# cp -r /opt/netbox-X.Y.X/netbox/reports /opt/netbox/netbox/reports/
# cp -r /opt/netbox-X.Y.Z/netbox/reports /opt/netbox/netbox/reports/
```
If you followed the original installation guide to set up gunicorn, be sure to copy its configuration as well:
@ -49,7 +49,7 @@ Copy the LDAP configuration if using LDAP:
# cp netbox-X.Y.Z/netbox/netbox/ldap_config.py netbox/netbox/netbox/ldap_config.py
```
## Option B: Clone the Git Repository (latest master release)
### Option B: Clone the Git Repository (latest master release)
This guide assumes that NetBox is installed at `/opt/netbox`. Pull down the most recent iteration of the master branch:
@ -60,9 +60,9 @@ This guide assumes that NetBox is installed at `/opt/netbox`. Pull down the most
# git status
```
# Run the Upgrade Script
## Run the Upgrade Script
Once the new code is in place, run the upgrade script (which may need to be run as root depending on how your environment is configured).
Once the new code is in place, run the upgrade script:
```no-highlight
# ./upgrade.sh
@ -70,7 +70,8 @@ Once the new code is in place, run the upgrade script (which may need to be run
This script:
* Installs or upgrades any new required Python packages
* Destroys and rebuilds the Python virtual environment
* Installs all required Python packages
* Applies any database migrations that were included in the release
* Collects all static files to be served by the HTTP service
@ -82,14 +83,16 @@ This script:
This may occur due to semantic differences in environment, and can be safely ignored. Never attempt to create new migrations unless you are intentionally modifying the database schema.
# Restart the WSGI Service
## Restart the NetBox Services
Finally, restart the WSGI services to run the new code. If you followed this guide for the initial installation, this is done using `systemctl`:
!!! warning
If you are upgrading from an installation that does not use a Python virtual environment, you'll need to update the systemd service files to reference the new Python and gunicorn executables before restarting the services. These are located in `/opt/netbox/venv/bin/`. See the example service files in `/opt/netbox/contrib/` for reference.
Finally, restart the gunicorn and RQ services:
```no-highlight
# sudo systemctl restart netbox
# sudo systemctl restart netbox-rq
# sudo systemctl restart netbox netbox-rq
```
!!! note
It's possible you are still using supervisord instead of the linux native systemd. If you are still using supervisord you can restart the services by either restarting supervisord or by using supervisorctl to restart netbox.
It's possible you are still using supervisord instead of systemd. If so, please see the instructions for [migrating to systemd](migrating-to-systemd.md).

Binary file not shown (screenshot; before: 98 KiB, after: 336 KiB)

Binary file not shown (screenshot; before: 134 KiB, after: 336 KiB)

Binary file not shown (screenshot; before: 112 KiB, after: 339 KiB)

View File

@ -1,11 +1,55 @@
# v2.7.8 (FUTURE)
# v2.7.9 (FUTURE)
**Note:** This release will deploy a Python virtual environment on upgrade in the `venv/` directory. This will require modifying the paths to your Python and gunicorn executables in the systemd service files. For more detail, please see the [upgrade instructions](https://netbox.readthedocs.io/en/stable/installation/upgrading/).
## Enhancements
* [#3949](https://github.com/netbox-community/netbox/issues/3949) - Revised the installation docs and upgrade script to employ a Python virtual environment
* [#4218](https://github.com/netbox-community/netbox/issues/4218) - Allow negative voltage for DC power feeds
* [#4281](https://github.com/netbox-community/netbox/issues/4281) - Allow filtering device component list views by type
* [#4284](https://github.com/netbox-community/netbox/issues/4284) - Add MRJ21 port and cable types
* [#4290](https://github.com/netbox-community/netbox/issues/4290) - Include device name in tooltip on rack elevations
* [#4305](https://github.com/netbox-community/netbox/issues/4305) - Add 10-inch option for rack width
## Bug Fixes
* [#4224](https://github.com/netbox-community/netbox/issues/4224) - Fix display of rear device image if front image is not defined
* [#4274](https://github.com/netbox-community/netbox/issues/4274) - Fix incorrect schema definition of `int` type choicefields
* [#4277](https://github.com/netbox-community/netbox/issues/4277) - Fix filtering of clusters by tenant
* [#4282](https://github.com/netbox-community/netbox/issues/4282) - Fix label on export button for device types
* [#4285](https://github.com/netbox-community/netbox/issues/4285) - Include A/Z termination sites in provider circuits table
* [#4295](https://github.com/netbox-community/netbox/issues/4295) - Fix assignment of parent LAG during interface bulk edit
* [#4300](https://github.com/netbox-community/netbox/issues/4300) - Pass "commit" argument when executing scripts via REST API
* [#4301](https://github.com/netbox-community/netbox/issues/4301) - Fix exception when deleting device type with components
* [#4306](https://github.com/netbox-community/netbox/issues/4306) - Fix toggling of device images for all racks in elevations view
---
# v2.7.8 (2020-02-25)
## Enhancements
* [#3145](https://github.com/netbox-community/netbox/issues/3145) - Add a "decommissioning" cable status
* [#4173](https://github.com/netbox-community/netbox/issues/4173) - Return graceful error message when webhook queuing fails
* [#4227](https://github.com/netbox-community/netbox/issues/4227) - Omit internal fields from the change log data
* [#4237](https://github.com/netbox-community/netbox/issues/4237) - Support Jinja2 templating for webhook payload and headers
* [#4262](https://github.com/netbox-community/netbox/issues/4262) - Extend custom scripts to pass the `commit` value via `run()`
* [#4267](https://github.com/netbox-community/netbox/issues/4267) - Denote rack role on rack elevations list
## Bug Fixes
* [#4221](https://github.com/netbox-community/netbox/issues/4221) - Fix exception when deleting a device with interface connections when an interfaces webhook is defined
* [#4222](https://github.com/netbox-community/netbox/issues/4222) - Escape double quotes on encapsulated values during CSV export
* [#4224](https://github.com/netbox-community/netbox/issues/4224) - Fix display of rear device image if front image is not defined
* [#4228](https://github.com/netbox-community/netbox/issues/4228) - Improve fit of device images in rack elevations
* [#4230](https://github.com/netbox-community/netbox/issues/4230) - Fix rack units filtering on elevation endpoint
* [#4232](https://github.com/netbox-community/netbox/issues/4232) - Enforce consistent background striping in rack elevations
* [#4235](https://github.com/netbox-community/netbox/issues/4235) - Fix API representation of `content_type` for export templates
* [#4239](https://github.com/netbox-community/netbox/issues/4239) - Fix exception when selecting all filtered objects during bulk edit
* [#4240](https://github.com/netbox-community/netbox/issues/4240) - Fix exception when filtering foreign keys by NULL
* [#4241](https://github.com/netbox-community/netbox/issues/4241) - Correct IP address hyperlinks on interface view
* [#4246](https://github.com/netbox-community/netbox/issues/4246) - Fix duplication of field attributes when multiple IPNetworkVars are present in a script
* [#4252](https://github.com/netbox-community/netbox/issues/4252) - Fix power port assignment for power outlet templates created via REST API
* [#4272](https://github.com/netbox-community/netbox/issues/4272) - Interface type should be required by API serializer
---
@ -16,7 +60,7 @@ NetBox, run the following management command to recalculate their naturalized va
```
python3 manage.py renaturalize dcim.Interface
```
## Enhancements

View File

@ -1,17 +1,22 @@
site_name: NetBox
theme: readthedocs
site_name: NetBox Documentation
site_url: https://netbox.readthedocs.io/
repo_url: https://github.com/netbox-community/netbox
theme:
name: readthedocs
navigation_depth: 3
markdown_extensions:
- admonition:
pages:
nav:
- Introduction: 'index.md'
- Installation:
- Installing NetBox: 'installation/index.md'
- 1. PostgreSQL: 'installation/1-postgresql.md'
- 2. NetBox: 'installation/2-netbox.md'
- 3. HTTP Daemon: 'installation/3-http-daemon.md'
- 4. LDAP (Optional): 'installation/4-ldap.md'
- 2. Redis: 'installation/2-redis.md'
- 3. NetBox: 'installation/3-netbox.md'
- 4. HTTP Daemon: 'installation/4-http-daemon.md'
- 5. LDAP (Optional): 'installation/5-ldap.md'
- Upgrading NetBox: 'installation/upgrading.md'
- Migrating to Python3: 'installation/migrating-to-python3.md'
- Migrating to systemd: 'installation/migrating-to-systemd.md'
- Configuration:
- Configuring NetBox: 'configuration/index.md'
@ -76,6 +81,3 @@ pages:
- Version 1.2: 'release-notes/version-1.2.md'
- Version 1.1: 'release-notes/version-1.1.md'
- Version 1.0: 'release-notes/version-1.0.md'
markdown_extensions:
- admonition:

View File

@ -10,6 +10,7 @@ from extras.models import CustomFieldModel, ObjectChange, TaggedItem
from utilities.models import ChangeLoggedModel
from utilities.utils import serialize_object
from .choices import *
from .querysets import CircuitQuerySet
__all__ = (
@ -184,6 +185,7 @@ class Circuit(ChangeLoggedModel, CustomFieldModel):
object_id_field='obj_id'
)
objects = CircuitQuerySet.as_manager()
tags = TaggableManager(through=TaggedItem)
csv_headers = [

View File

@ -0,0 +1,15 @@
from django.db.models import OuterRef, QuerySet, Subquery
class CircuitQuerySet(QuerySet):
def annotate_sites(self):
"""
Annotate the A and Z termination site names for ordering.
"""
from circuits.models import CircuitTermination
_terminations = CircuitTermination.objects.filter(circuit=OuterRef('pk'))
return self.annotate(
a_side=Subquery(_terminations.filter(term_side='A').values('site__name')[:1]),
z_side=Subquery(_terminations.filter(term_side='Z').values('site__name')[:1]),
)
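# Example usage (not part of this file): because Circuit.objects is CircuitQuerySet.as_manager(),
# a view can order circuits by their annotated termination sites, e.g.:
#
#     circuits = Circuit.objects.annotate_sites().order_by('a_side')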

View File

@ -4,6 +4,7 @@ from circuits.choices import *
from circuits.filters import *
from circuits.models import Circuit, CircuitTermination, CircuitType, Provider
from dcim.models import Region, Site
from tenancy.models import Tenant, TenantGroup
class ProviderTestCase(TestCase):
@ -138,6 +139,20 @@ class CircuitTestCase(TestCase):
)
Site.objects.bulk_create(sites)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
circuit_types = (
CircuitType(name='Test Circuit Type 1', slug='test-circuit-type-1'),
CircuitType(name='Test Circuit Type 2', slug='test-circuit-type-2'),
@ -151,12 +166,12 @@ class CircuitTestCase(TestCase):
Provider.objects.bulk_create(providers)
circuits = (
Circuit(provider=providers[0], type=circuit_types[0], cid='Test Circuit 1', install_date='2020-01-01', commit_rate=1000, status=CircuitStatusChoices.STATUS_ACTIVE),
Circuit(provider=providers[0], type=circuit_types[0], cid='Test Circuit 2', install_date='2020-01-02', commit_rate=2000, status=CircuitStatusChoices.STATUS_ACTIVE),
Circuit(provider=providers[0], type=circuit_types[0], cid='Test Circuit 3', install_date='2020-01-03', commit_rate=3000, status=CircuitStatusChoices.STATUS_PLANNED),
Circuit(provider=providers[1], type=circuit_types[1], cid='Test Circuit 4', install_date='2020-01-04', commit_rate=4000, status=CircuitStatusChoices.STATUS_PLANNED),
Circuit(provider=providers[1], type=circuit_types[1], cid='Test Circuit 5', install_date='2020-01-05', commit_rate=5000, status=CircuitStatusChoices.STATUS_OFFLINE),
Circuit(provider=providers[1], type=circuit_types[1], cid='Test Circuit 6', install_date='2020-01-06', commit_rate=6000, status=CircuitStatusChoices.STATUS_OFFLINE),
Circuit(provider=providers[0], tenant=tenants[0], type=circuit_types[0], cid='Test Circuit 1', install_date='2020-01-01', commit_rate=1000, status=CircuitStatusChoices.STATUS_ACTIVE),
Circuit(provider=providers[0], tenant=tenants[0], type=circuit_types[0], cid='Test Circuit 2', install_date='2020-01-02', commit_rate=2000, status=CircuitStatusChoices.STATUS_ACTIVE),
Circuit(provider=providers[0], tenant=tenants[1], type=circuit_types[0], cid='Test Circuit 3', install_date='2020-01-03', commit_rate=3000, status=CircuitStatusChoices.STATUS_PLANNED),
Circuit(provider=providers[1], tenant=tenants[1], type=circuit_types[1], cid='Test Circuit 4', install_date='2020-01-04', commit_rate=4000, status=CircuitStatusChoices.STATUS_PLANNED),
Circuit(provider=providers[1], tenant=tenants[2], type=circuit_types[1], cid='Test Circuit 5', install_date='2020-01-05', commit_rate=5000, status=CircuitStatusChoices.STATUS_OFFLINE),
Circuit(provider=providers[1], tenant=tenants[2], type=circuit_types[1], cid='Test Circuit 6', install_date='2020-01-06', commit_rate=6000, status=CircuitStatusChoices.STATUS_OFFLINE),
)
Circuit.objects.bulk_create(circuits)
@ -216,6 +231,20 @@ class CircuitTestCase(TestCase):
params = {'site': [sites[0].slug, sites[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
class CircuitTerminationTestCase(TestCase):
queryset = CircuitTermination.objects.all()

View File

@ -37,10 +37,14 @@ class ProviderView(PermissionRequiredMixin, View):
def get(self, request, slug):
provider = get_object_or_404(Provider, slug=slug)
circuits = Circuit.objects.filter(provider=provider).prefetch_related('type', 'tenant', 'terminations__site')
circuits = Circuit.objects.filter(
provider=provider
).prefetch_related(
'type', 'tenant', 'terminations__site'
).annotate_sites()
show_graphs = Graph.objects.filter(type__model='provider').exists()
circuits_table = tables.CircuitTable(circuits, orderable=False)
circuits_table = tables.CircuitTable(circuits)
circuits_table.columns.hide('provider')
paginate = {
@ -142,10 +146,7 @@ class CircuitListView(PermissionRequiredMixin, ObjectListView):
_terminations = CircuitTermination.objects.filter(circuit=OuterRef('pk'))
queryset = Circuit.objects.prefetch_related(
'provider', 'type', 'tenant', 'terminations__site'
).annotate(
a_side=Subquery(_terminations.filter(term_side='A').values('site__name')[:1]),
z_side=Subquery(_terminations.filter(term_side='Z').values('site__name')[:1]),
)
).annotate_sites()
filterset = filters.CircuitFilterSet
filterset_form = forms.CircuitFilterForm
table = tables.CircuitTable

View File

@ -3,8 +3,8 @@ from rest_framework import serializers
from dcim.constants import CONNECTION_STATUS_CHOICES
from dcim.models import (
Cable, ConsolePort, ConsoleServerPort, Device, DeviceBay, DeviceType, DeviceRole, FrontPort, FrontPortTemplate,
Interface, Manufacturer, Platform, PowerFeed, PowerOutlet, PowerPanel, PowerPort, Rack, RackGroup, RackRole,
RearPort, RearPortTemplate, Region, Site, VirtualChassis,
Interface, Manufacturer, Platform, PowerFeed, PowerOutlet, PowerPanel, PowerPort, PowerPortTemplate, Rack,
RackGroup, RackRole, RearPort, RearPortTemplate, Region, Site, VirtualChassis,
)
from utilities.api import ChoiceField, WritableNestedSerializer
@ -25,6 +25,7 @@ __all__ = [
'NestedPowerOutletSerializer',
'NestedPowerPanelSerializer',
'NestedPowerPortSerializer',
'NestedPowerPortTemplateSerializer',
'NestedRackGroupSerializer',
'NestedRackRoleSerializer',
'NestedRackSerializer',
@ -111,6 +112,14 @@ class NestedDeviceTypeSerializer(WritableNestedSerializer):
fields = ['id', 'url', 'manufacturer', 'model', 'slug', 'display_name', 'device_count']
class NestedPowerPortTemplateSerializer(WritableNestedSerializer):
url = serializers.HyperlinkedIdentityField(view_name='dcim-api:powerporttemplate-detail')
class Meta:
model = PowerPortTemplate
fields = ['id', 'url', 'name']
class NestedRearPortTemplateSerializer(WritableNestedSerializer):
url = serializers.HyperlinkedIdentityField(view_name='dcim-api:rearporttemplate-detail')

View File

@ -172,6 +172,10 @@ class RackReservationSerializer(ValidatedModelSerializer):
class RackElevationDetailFilterSerializer(serializers.Serializer):
q = serializers.CharField(
required=False,
default=None
)
face = serializers.ChoiceField(
choices=DeviceFaceChoices,
default=DeviceFaceChoices.FACE_FRONT
@ -278,7 +282,7 @@ class PowerOutletTemplateSerializer(ValidatedModelSerializer):
allow_blank=True,
required=False
)
power_port = PowerPortTemplateSerializer(
power_port = NestedPowerPortTemplateSerializer(
required=False
)
feed_leg = ChoiceField(
@ -294,7 +298,7 @@ class PowerOutletTemplateSerializer(ValidatedModelSerializer):
class InterfaceTemplateSerializer(ValidatedModelSerializer):
device_type = NestedDeviceTypeSerializer()
type = ChoiceField(choices=InterfaceTypeChoices, required=False)
type = ChoiceField(choices=InterfaceTypeChoices)
class Meta:
model = InterfaceTemplate
@ -514,7 +518,7 @@ class PowerPortSerializer(TaggitSerializer, ConnectedEndpointSerializer):
class InterfaceSerializer(TaggitSerializer, ConnectedEndpointSerializer):
device = NestedDeviceSerializer()
type = ChoiceField(choices=InterfaceTypeChoices, required=False)
type = ChoiceField(choices=InterfaceTypeChoices)
lag = NestedInterfaceSerializer(required=False, allow_null=True)
mode = ChoiceField(choices=InterfaceModeChoices, allow_blank=True, required=False)
untagged_vlan = NestedVLANSerializer(required=False, allow_null=True)

View File

@ -210,6 +210,11 @@ class RackViewSet(CustomFieldModelViewSet):
expand_devices=data['expand_devices']
)
# Enable filtering rack units by ID or name
q = data['q']
if q:
elevation = [u for u in elevation if q in str(u['id']) or q in str(u['name'])]
page = self.paginate_queryset(elevation)
if page is not None:
rack_units = serializers.RackUnitSerializer(page, many=True, context={'request': request})

View File

@ -55,10 +55,12 @@ class RackTypeChoices(ChoiceSet):
class RackWidthChoices(ChoiceSet):
WIDTH_10IN = 10
WIDTH_19IN = 19
WIDTH_23IN = 23
CHOICES = (
(WIDTH_10IN, '10 inches'),
(WIDTH_19IN, '19 inches'),
(WIDTH_23IN, '23 inches'),
)
@ -836,6 +838,7 @@ class PortTypeChoices(ChoiceSet):
TYPE_8P8C = '8p8c'
TYPE_110_PUNCH = '110-punch'
TYPE_BNC = 'bnc'
TYPE_MRJ21 = 'mrj21'
TYPE_ST = 'st'
TYPE_SC = 'sc'
TYPE_SC_APC = 'sc-apc'
@ -854,6 +857,7 @@ class PortTypeChoices(ChoiceSet):
(TYPE_8P8C, '8P8C'),
(TYPE_110_PUNCH, '110 Punch'),
(TYPE_BNC, 'BNC'),
(TYPE_MRJ21, 'MRJ21'),
),
),
(
@ -904,6 +908,7 @@ class CableTypeChoices(ChoiceSet):
TYPE_CAT7 = 'cat7'
TYPE_DAC_ACTIVE = 'dac-active'
TYPE_DAC_PASSIVE = 'dac-passive'
TYPE_MRJ21_TRUNK = 'mrj21-trunk'
TYPE_COAXIAL = 'coaxial'
TYPE_MMF = 'mmf'
TYPE_MMF_OM1 = 'mmf-om1'
@ -927,6 +932,7 @@ class CableTypeChoices(ChoiceSet):
(TYPE_CAT7, 'CAT7'),
(TYPE_DAC_ACTIVE, 'Direct Attach Copper (Active)'),
(TYPE_DAC_PASSIVE, 'Direct Attach Copper (Passive)'),
(TYPE_MRJ21_TRUNK, 'MRJ21 Trunk'),
(TYPE_COAXIAL, 'Coaxial'),
),
),
@ -973,10 +979,12 @@ class CableStatusChoices(ChoiceSet):
STATUS_CONNECTED = 'connected'
STATUS_PLANNED = 'planned'
STATUS_DECOMMISSIONING = 'decommissioning'
CHOICES = (
(STATUS_CONNECTED, 'Connected'),
(STATUS_PLANNED, 'Planned'),
(STATUS_DECOMMISSIONING, 'Decommissioning'),
)
LEGACY_MAP = {

View File

@ -61,13 +61,10 @@ POWERFEED_MAX_UTILIZATION_DEFAULT = 80 # Percentage
# Cabling and connections
#
# TODO: Replace with CableStatusChoices?
# Console/power/interface connection statuses
CONNECTION_STATUS_PLANNED = False
CONNECTION_STATUS_CONNECTED = True
CONNECTION_STATUS_CHOICES = [
[CONNECTION_STATUS_PLANNED, 'Planned'],
[CONNECTION_STATUS_CONNECTED, 'Connected'],
[False, 'Not Connected'],
[True, 'Connected'],
]
# Cable endpoint types

View File

@ -20,6 +20,16 @@ class RackElevationSVG:
self.rack = rack
self.include_images = include_images
def _get_device_description(self, device):
return '{} ({}) — {} ({}U) {} {}'.format(
device.name,
device.device_role,
device.device_type.display_name,
device.device_type.u_height,
device.asset_tag or '',
device.serial or ''
)
@staticmethod
def _add_gradient(drawing, id_, color):
gradient = drawing.linearGradient(
@ -64,10 +74,7 @@ class RackElevationSVG:
fill='black'
)
)
link.set_desc('{}{} ({}U) {} {}'.format(
device.device_role, device.device_type.display_name,
device.device_type.u_height, device.asset_tag or '', device.serial or ''
))
link.set_desc(self._get_device_description(device))
link.add(drawing.rect(start, end, style='fill: #{}'.format(color), class_='slot'))
hex_color = '#{}'.format(foreground_color(color))
link.add(drawing.text(str(name), insert=text, fill=hex_color))
@ -81,10 +88,7 @@ class RackElevationSVG:
def _draw_device_rear(self, drawing, device, start, end, text):
rect = drawing.rect(start, end, class_="slot blocked")
rect.set_desc('{}{} ({}U) {} {}'.format(
device.device_role, device.device_type.display_name,
device.device_type.u_height, device.asset_tag or '', device.serial or ''
))
rect.set_desc(self._get_device_description(device))
drawing.add(rect)
drawing.add(drawing.text(str(device), insert=text))

View File

@ -2344,6 +2344,11 @@ class DeviceBulkAddInterfaceForm(DeviceBulkAddComponentForm):
class ConsolePortFilterForm(DeviceComponentFilterForm):
model = ConsolePort
type = forms.MultipleChoiceField(
choices=ConsolePortTypeChoices,
required=False,
widget=StaticSelect2Multiple()
)
tag = TagFilterField(model)
@ -2429,6 +2434,11 @@ class ConsolePortCSVForm(forms.ModelForm):
class ConsoleServerPortFilterForm(DeviceComponentFilterForm):
model = ConsoleServerPort
type = forms.MultipleChoiceField(
choices=ConsolePortTypeChoices,
required=False,
widget=StaticSelect2Multiple()
)
tag = TagFilterField(model)
@ -2528,6 +2538,11 @@ class ConsoleServerPortCSVForm(forms.ModelForm):
class PowerPortFilterForm(DeviceComponentFilterForm):
model = PowerPort
type = forms.MultipleChoiceField(
choices=PowerPortTypeChoices,
required=False,
widget=StaticSelect2Multiple()
)
tag = TagFilterField(model)
@ -2633,6 +2648,11 @@ class PowerPortCSVForm(forms.ModelForm):
class PowerOutletFilterForm(DeviceComponentFilterForm):
model = PowerOutlet
type = forms.MultipleChoiceField(
choices=PowerOutletTypeChoices,
required=False,
widget=StaticSelect2Multiple()
)
tag = TagFilterField(model)
@ -2821,6 +2841,11 @@ class PowerOutletBulkDisconnectForm(ConfirmationForm):
class InterfaceFilterForm(DeviceComponentFilterForm):
model = Interface
type = forms.MultipleChoiceField(
choices=InterfaceTypeChoices,
required=False,
widget=StaticSelect2Multiple()
)
enabled = forms.NullBooleanField(
required=False,
widget=StaticSelect2(
@ -3190,6 +3215,11 @@ class InterfaceBulkDisconnectForm(ConfirmationForm):
class FrontPortFilterForm(DeviceComponentFilterForm):
model = FrontPort
type = forms.MultipleChoiceField(
choices=PortTypeChoices,
required=False,
widget=StaticSelect2Multiple()
)
tag = TagFilterField(model)
@ -3379,6 +3409,11 @@ class FrontPortBulkDisconnectForm(ConfirmationForm):
class RearPortFilterForm(DeviceComponentFilterForm):
model = RearPort
type = forms.MultipleChoiceField(
choices=PortTypeChoices,
required=False,
widget=StaticSelect2Multiple()
)
tag = TagFilterField(model)

View File

@ -0,0 +1,19 @@
# Generated by Django 2.2.10 on 2020-03-03 16:59
from django.db import migrations, models
import utilities.validators
class Migration(migrations.Migration):
dependencies = [
('dcim', '0098_devicetype_images'),
]
operations = [
migrations.AlterField(
model_name='powerfeed',
name='voltage',
field=models.SmallIntegerField(default=120, validators=[utilities.validators.ExclusionValidator([0])]),
),
]
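The migration above references `utilities.validators.ExclusionValidator`, whose implementation is not shown in this diff. A rough sketch of the behavior implied by `ExclusionValidator([0])` (reject a value of exactly zero while still permitting negative values, e.g. for DC feeds):

```
# Illustrative only; the real validator lives in utilities/validators.py.
from django.core.exceptions import ValidationError

class ExclusionValidator:
    """Reject any value that appears in a list of disallowed values."""
    def __init__(self, values):
        self.values = values

    def __call__(self, value):
        if value in self.values:
            raise ValidationError("This value may not be {}.".format(value))
```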

View File

@ -6,7 +6,7 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('dcim', '0098_devicetype_images'),
('dcim', '0099_powerfeed_negative_voltage'),
]
operations = [

View File

@ -20,10 +20,11 @@ from dcim.choices import *
from dcim.constants import *
from dcim.fields import ASNField
from dcim.elevations import RackElevationSVG
from extras.models import ConfigContextModel, CustomFieldModel, TaggedItem
from extras.models import ConfigContextModel, CustomFieldModel, ObjectChange, TaggedItem
from utilities.fields import ColorField, NaturalOrderingField
from utilities.models import ChangeLoggedModel
from utilities.utils import to_meters
from utilities.utils import serialize_object, to_meters
from utilities.validators import ExclusionValidator
from .device_component_templates import (
ConsolePortTemplate, ConsoleServerPortTemplate, DeviceBayTemplate, FrontPortTemplate, InterfaceTemplate,
PowerOutletTemplate, PowerPortTemplate, RearPortTemplate,
@ -118,6 +119,15 @@ class Region(MPTTModel, ChangeLoggedModel):
Q(region__in=self.get_descendants())
).count()
def to_objectchange(self, action):
# Remove MPTT-internal fields
return ObjectChange(
changed_object=self,
object_repr=str(self),
action=action,
object_data=serialize_object(self, exclude=['level', 'lft', 'rght', 'tree_id'])
)
#
# Sites
@ -1766,9 +1776,9 @@ class PowerFeed(ChangeLoggedModel, CableTermination, CustomFieldModel):
choices=PowerFeedPhaseChoices,
default=PowerFeedPhaseChoices.PHASE_SINGLE
)
voltage = models.PositiveSmallIntegerField(
validators=[MinValueValidator(1)],
default=POWERFEED_VOLTAGE_DEFAULT
voltage = models.SmallIntegerField(
default=POWERFEED_VOLTAGE_DEFAULT,
validators=[ExclusionValidator([0])]
)
amperage = models.PositiveSmallIntegerField(
validators=[MinValueValidator(1)],
@ -1850,10 +1860,16 @@ class PowerFeed(ChangeLoggedModel, CableTermination, CustomFieldModel):
self.rack, self.rack.site, self.power_panel, self.power_panel.site
))
# AC voltage cannot be negative
if self.voltage < 0 and self.supply == PowerFeedSupplyChoices.SUPPLY_AC:
raise ValidationError({
"voltage": "Voltage cannot be negative for AC supply"
})
def save(self, *args, **kwargs):
# Cache the available_power property on the instance
kva = self.voltage * self.amperage * (self.max_utilization / 100)
kva = abs(self.voltage) * self.amperage * (self.max_utilization / 100)
if self.phase == PowerFeedPhaseChoices.PHASE_3PHASE:
self.available_power = round(kva * 1.732)
else:
@ -1956,6 +1972,7 @@ class Cable(ChangeLoggedModel):
STATUS_CLASS_MAP = {
CableStatusChoices.STATUS_CONNECTED: 'success',
CableStatusChoices.STATUS_PLANNED: 'info',
CableStatusChoices.STATUS_DECOMMISSIONING: 'warning',
}
class Meta:
@ -2116,14 +2133,14 @@ class Cable(ChangeLoggedModel):
b_path = self.termination_a.trace()
# Determine overall path status (connected or planned)
if self.status == CableStatusChoices.STATUS_PLANNED:
path_status = CONNECTION_STATUS_PLANNED
else:
path_status = CONNECTION_STATUS_CONNECTED
if self.status == CableStatusChoices.STATUS_CONNECTED:
path_status = True
for segment in a_path[1:] + b_path[1:]:
if segment[1] is None or segment[1].status == CableStatusChoices.STATUS_PLANNED:
path_status = CONNECTION_STATUS_PLANNED
if segment[1] is None or segment[1].status != CableStatusChoices.STATUS_CONNECTED:
path_status = False
break
else:
path_status = False
a_endpoint = a_path[-1][2]
b_endpoint = b_path[-1][2]

View File

@ -1,4 +1,4 @@
from django.core.exceptions import ValidationError
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.core.validators import MaxValueValidator, MinValueValidator
from django.db import models
@ -37,11 +37,17 @@ class ComponentTemplateModel(models.Model):
raise NotImplementedError()
def to_objectchange(self, action):
# Annotate the parent DeviceType
try:
device_type = self.device_type
except ObjectDoesNotExist:
# The parent DeviceType has already been deleted
device_type = None
return ObjectChange(
changed_object=self,
object_repr=str(self),
action=action,
related_object=self.device_type,
related_object=device_type,
object_data=serialize_object(self)
)

View File

@ -360,9 +360,21 @@ class PowerPort(CableTermination, ComponentModel):
@property
def connected_endpoint(self):
if self._connected_poweroutlet:
return self._connected_poweroutlet
return self._connected_powerfeed
"""
Return the connected PowerOutlet, if it exists, or the connected PowerFeed, if it exists. We have to check for
ObjectDoesNotExist in case the referenced object has been deleted from the database.
"""
try:
if self._connected_poweroutlet:
return self._connected_poweroutlet
except ObjectDoesNotExist:
pass
try:
if self._connected_powerfeed:
return self._connected_powerfeed
except ObjectDoesNotExist:
pass
return None
@connected_endpoint.setter
def connected_endpoint(self, value):
@ -717,9 +729,21 @@ class Interface(CableTermination, ComponentModel):
@property
def connected_endpoint(self):
if self._connected_interface:
return self._connected_interface
return self._connected_circuittermination
"""
Return the connected Interface, if it exists, or the connected CircuitTermination, if it exists. We have to
check for ObjectDoesNotExist in case the referenced object has been deleted from the database.
"""
try:
if self._connected_interface:
return self._connected_interface
except ObjectDoesNotExist:
pass
try:
if self._connected_circuittermination:
return self._connected_circuittermination
except ObjectDoesNotExist:
pass
return None
@connected_endpoint.setter
def connected_endpoint(self, value):

View File

@ -4,7 +4,6 @@ from netaddr import IPNetwork
from rest_framework import status
from circuits.models import Circuit, CircuitTermination, CircuitType, Provider
from dcim.api import serializers
from dcim.choices import *
from dcim.constants import *
from dcim.models import (
@ -589,6 +588,28 @@ class RackTest(APITestCase):
self.assertEqual(response.data['name'], self.rack1.name)
def test_get_elevation_rack_units(self):
url = '{}?q=3'.format(reverse('dcim-api:rack-elevation', kwargs={'pk': self.rack1.pk}))
response = self.client.get(url, **self.header)
self.assertEqual(response.data['count'], 13)
url = '{}?q=U3'.format(reverse('dcim-api:rack-elevation', kwargs={'pk': self.rack1.pk}))
response = self.client.get(url, **self.header)
self.assertEqual(response.data['count'], 11)
url = '{}?q=10'.format(reverse('dcim-api:rack-elevation', kwargs={'pk': self.rack1.pk}))
response = self.client.get(url, **self.header)
self.assertEqual(response.data['count'], 1)
url = '{}?q=U20'.format(reverse('dcim-api:rack-elevation', kwargs={'pk': self.rack1.pk}))
response = self.client.get(url, **self.header)
self.assertEqual(response.data['count'], 1)
def test_get_rack_elevation(self):
url = reverse('dcim-api:rack-elevation', kwargs={'pk': self.rack1.pk})
@ -1441,13 +1462,13 @@ class InterfaceTemplateTest(APITestCase):
manufacturer=self.manufacturer, model='Test Device Type 1', slug='test-device-type-1'
)
self.interfacetemplate1 = InterfaceTemplate.objects.create(
device_type=self.devicetype, name='Test Interface Template 1'
device_type=self.devicetype, name='Test Interface Template 1', type='1000base-t'
)
self.interfacetemplate2 = InterfaceTemplate.objects.create(
device_type=self.devicetype, name='Test Interface Template 2'
device_type=self.devicetype, name='Test Interface Template 2', type='1000base-t'
)
self.interfacetemplate3 = InterfaceTemplate.objects.create(
device_type=self.devicetype, name='Test Interface Template 3'
device_type=self.devicetype, name='Test Interface Template 3', type='1000base-t'
)
def test_get_interfacetemplate(self):
@ -1469,6 +1490,7 @@ class InterfaceTemplateTest(APITestCase):
data = {
'device_type': self.devicetype.pk,
'name': 'Test Interface Template 4',
'type': '1000base-t',
}
url = reverse('dcim-api:interfacetemplate-list')
@ -1486,14 +1508,17 @@ class InterfaceTemplateTest(APITestCase):
{
'device_type': self.devicetype.pk,
'name': 'Test Interface Template 4',
'type': '1000base-t',
},
{
'device_type': self.devicetype.pk,
'name': 'Test Interface Template 5',
'type': '1000base-t',
},
{
'device_type': self.devicetype.pk,
'name': 'Test Interface Template 6',
'type': '1000base-t',
},
]
@ -1511,6 +1536,7 @@ class InterfaceTemplateTest(APITestCase):
data = {
'device_type': self.devicetype.pk,
'name': 'Test Interface Template X',
'type': '1000base-x-gbic',
}
url = reverse('dcim-api:interfacetemplate-detail', kwargs={'pk': self.interfacetemplate1.pk})
@ -2621,9 +2647,9 @@ class InterfaceTest(APITestCase):
self.device = Device.objects.create(
device_type=devicetype, device_role=devicerole, name='Test Device 1', site=site
)
self.interface1 = Interface.objects.create(device=self.device, name='Test Interface 1')
self.interface2 = Interface.objects.create(device=self.device, name='Test Interface 2')
self.interface3 = Interface.objects.create(device=self.device, name='Test Interface 3')
self.interface1 = Interface.objects.create(device=self.device, name='Test Interface 1', type='1000base-t')
self.interface2 = Interface.objects.create(device=self.device, name='Test Interface 2', type='1000base-t')
self.interface3 = Interface.objects.create(device=self.device, name='Test Interface 3', type='1000base-t')
self.vlan1 = VLAN.objects.create(name="Test VLAN 1", vid=1)
self.vlan2 = VLAN.objects.create(name="Test VLAN 2", vid=2)
@ -2684,6 +2710,7 @@ class InterfaceTest(APITestCase):
data = {
'device': self.device.pk,
'name': 'Test Interface 4',
'type': '1000base-t',
}
url = reverse('dcim-api:interface-list')
@ -2700,6 +2727,7 @@ class InterfaceTest(APITestCase):
data = {
'device': self.device.pk,
'name': 'Test Interface 4',
'type': '1000base-t',
'mode': InterfaceModeChoices.MODE_TAGGED,
'untagged_vlan': self.vlan3.id,
'tagged_vlans': [self.vlan1.id, self.vlan2.id],
@ -2721,14 +2749,17 @@ class InterfaceTest(APITestCase):
{
'device': self.device.pk,
'name': 'Test Interface 4',
'type': '1000base-t',
},
{
'device': self.device.pk,
'name': 'Test Interface 5',
'type': '1000base-t',
},
{
'device': self.device.pk,
'name': 'Test Interface 6',
'type': '1000base-t',
},
]
@ -2747,6 +2778,7 @@ class InterfaceTest(APITestCase):
{
'device': self.device.pk,
'name': 'Test Interface 4',
'type': '1000base-t',
'mode': InterfaceModeChoices.MODE_TAGGED,
'untagged_vlan': self.vlan2.id,
'tagged_vlans': [self.vlan1.id],
@ -2754,6 +2786,7 @@ class InterfaceTest(APITestCase):
{
'device': self.device.pk,
'name': 'Test Interface 5',
'type': '1000base-t',
'mode': InterfaceModeChoices.MODE_TAGGED,
'untagged_vlan': self.vlan2.id,
'tagged_vlans': [self.vlan1.id],
@ -2761,6 +2794,7 @@ class InterfaceTest(APITestCase):
{
'device': self.device.pk,
'name': 'Test Interface 6',
'type': '1000base-t',
'mode': InterfaceModeChoices.MODE_TAGGED,
'untagged_vlan': self.vlan2.id,
'tagged_vlans': [self.vlan1.id],
@ -2786,6 +2820,7 @@ class InterfaceTest(APITestCase):
data = {
'device': self.device.pk,
'name': 'Test Interface X',
'type': '1000base-x-gbic',
'lag': lag_interface.pk,
}

View File

@ -11,7 +11,7 @@ from dcim.models import (
VirtualChassis,
)
from ipam.models import IPAddress
from tenancy.models import Tenant
from tenancy.models import Tenant, TenantGroup
from virtualization.models import Cluster, ClusterType
@ -76,10 +76,24 @@ class SiteTestCase(TestCase):
for region in regions:
region.save()
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
sites = (
Site(name='Site 1', slug='site-1', region=regions[0], status=SiteStatusChoices.STATUS_ACTIVE, facility='Facility 1', asn=65001, latitude=10, longitude=10, contact_name='Contact 1', contact_phone='123-555-0001', contact_email='contact1@example.com'),
Site(name='Site 2', slug='site-2', region=regions[1], status=SiteStatusChoices.STATUS_PLANNED, facility='Facility 2', asn=65002, latitude=20, longitude=20, contact_name='Contact 2', contact_phone='123-555-0002', contact_email='contact2@example.com'),
Site(name='Site 3', slug='site-3', region=regions[2], status=SiteStatusChoices.STATUS_RETIRED, facility='Facility 3', asn=65003, latitude=30, longitude=30, contact_name='Contact 3', contact_phone='123-555-0003', contact_email='contact3@example.com'),
Site(name='Site 1', slug='site-1', region=regions[0], tenant=tenants[0], status=SiteStatusChoices.STATUS_ACTIVE, facility='Facility 1', asn=65001, latitude=10, longitude=10, contact_name='Contact 1', contact_phone='123-555-0001', contact_email='contact1@example.com'),
Site(name='Site 2', slug='site-2', region=regions[1], tenant=tenants[1], status=SiteStatusChoices.STATUS_PLANNED, facility='Facility 2', asn=65002, latitude=20, longitude=20, contact_name='Contact 2', contact_phone='123-555-0002', contact_email='contact2@example.com'),
Site(name='Site 3', slug='site-3', region=regions[2], tenant=tenants[2], status=SiteStatusChoices.STATUS_RETIRED, facility='Facility 3', asn=65003, latitude=30, longitude=30, contact_name='Contact 3', contact_phone='123-555-0003', contact_email='contact3@example.com'),
)
Site.objects.bulk_create(sites)
@ -140,6 +154,20 @@ class SiteTestCase(TestCase):
params = {'region': [regions[0].slug, regions[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
class RackGroupTestCase(TestCase):
queryset = RackGroup.objects.all()
@ -266,10 +294,24 @@ class RackTestCase(TestCase):
)
RackRole.objects.bulk_create(rack_roles)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
racks = (
Rack(name='Rack 1', facility_id='rack-1', site=sites[0], group=rack_groups[0], status=RackStatusChoices.STATUS_ACTIVE, role=rack_roles[0], serial='ABC', asset_tag='1001', type=RackTypeChoices.TYPE_2POST, width=RackWidthChoices.WIDTH_19IN, u_height=42, desc_units=False, outer_width=100, outer_depth=100, outer_unit=RackDimensionUnitChoices.UNIT_MILLIMETER),
Rack(name='Rack 2', facility_id='rack-2', site=sites[1], group=rack_groups[1], status=RackStatusChoices.STATUS_PLANNED, role=rack_roles[1], serial='DEF', asset_tag='1002', type=RackTypeChoices.TYPE_4POST, width=RackWidthChoices.WIDTH_19IN, u_height=43, desc_units=False, outer_width=200, outer_depth=200, outer_unit=RackDimensionUnitChoices.UNIT_MILLIMETER),
Rack(name='Rack 3', facility_id='rack-3', site=sites[2], group=rack_groups[2], status=RackStatusChoices.STATUS_RESERVED, role=rack_roles[2], serial='GHI', asset_tag='1003', type=RackTypeChoices.TYPE_CABINET, width=RackWidthChoices.WIDTH_23IN, u_height=44, desc_units=True, outer_width=300, outer_depth=300, outer_unit=RackDimensionUnitChoices.UNIT_INCH),
Rack(name='Rack 1', facility_id='rack-1', site=sites[0], group=rack_groups[0], tenant=tenants[0], status=RackStatusChoices.STATUS_ACTIVE, role=rack_roles[0], serial='ABC', asset_tag='1001', type=RackTypeChoices.TYPE_2POST, width=RackWidthChoices.WIDTH_19IN, u_height=42, desc_units=False, outer_width=100, outer_depth=100, outer_unit=RackDimensionUnitChoices.UNIT_MILLIMETER),
Rack(name='Rack 2', facility_id='rack-2', site=sites[1], group=rack_groups[1], tenant=tenants[1], status=RackStatusChoices.STATUS_PLANNED, role=rack_roles[1], serial='DEF', asset_tag='1002', type=RackTypeChoices.TYPE_4POST, width=RackWidthChoices.WIDTH_19IN, u_height=43, desc_units=False, outer_width=200, outer_depth=200, outer_unit=RackDimensionUnitChoices.UNIT_MILLIMETER),
Rack(name='Rack 3', facility_id='rack-3', site=sites[2], group=rack_groups[2], tenant=tenants[2], status=RackStatusChoices.STATUS_RESERVED, role=rack_roles[2], serial='GHI', asset_tag='1003', type=RackTypeChoices.TYPE_CABINET, width=RackWidthChoices.WIDTH_23IN, u_height=44, desc_units=True, outer_width=300, outer_depth=300, outer_unit=RackDimensionUnitChoices.UNIT_INCH),
)
Rack.objects.bulk_create(racks)
@ -366,6 +408,20 @@ class RackTestCase(TestCase):
params = {'serial': 'abc'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 1)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
class RackReservationTestCase(TestCase):
queryset = RackReservation.objects.all()
@ -402,10 +458,24 @@ class RackReservationTestCase(TestCase):
)
User.objects.bulk_create(users)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
reservations = (
RackReservation(rack=racks[0], units=[1, 2, 3], user=users[0]),
RackReservation(rack=racks[1], units=[4, 5, 6], user=users[1]),
RackReservation(rack=racks[2], units=[7, 8, 9], user=users[2]),
RackReservation(rack=racks[0], units=[1, 2, 3], user=users[0], tenant=tenants[0]),
RackReservation(rack=racks[1], units=[4, 5, 6], user=users[1], tenant=tenants[1]),
RackReservation(rack=racks[2], units=[7, 8, 9], user=users[2], tenant=tenants[2]),
)
RackReservation.objects.bulk_create(reservations)
@ -436,6 +506,20 @@ class RackReservationTestCase(TestCase):
# params = {'user': [users[0].username, users[1].username]}
# self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
class ManufacturerTestCase(TestCase):
queryset = Manufacturer.objects.all()
@ -1099,10 +1183,24 @@ class DeviceTestCase(TestCase):
)
Cluster.objects.bulk_create(clusters)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
devices = (
Device(name='Device 1', device_type=device_types[0], device_role=device_roles[0], platform=platforms[0], serial='ABC', asset_tag='1001', site=sites[0], rack=racks[0], position=1, face=DeviceFaceChoices.FACE_FRONT, status=DeviceStatusChoices.STATUS_ACTIVE, cluster=clusters[0], local_context_data={"foo": 123}),
Device(name='Device 2', device_type=device_types[1], device_role=device_roles[1], platform=platforms[1], serial='DEF', asset_tag='1002', site=sites[1], rack=racks[1], position=2, face=DeviceFaceChoices.FACE_FRONT, status=DeviceStatusChoices.STATUS_STAGED, cluster=clusters[1]),
Device(name='Device 3', device_type=device_types[2], device_role=device_roles[2], platform=platforms[2], serial='GHI', asset_tag='1003', site=sites[2], rack=racks[2], position=3, face=DeviceFaceChoices.FACE_REAR, status=DeviceStatusChoices.STATUS_FAILED, cluster=clusters[2]),
Device(name='Device 1', device_type=device_types[0], device_role=device_roles[0], platform=platforms[0], tenant=tenants[0], serial='ABC', asset_tag='1001', site=sites[0], rack=racks[0], position=1, face=DeviceFaceChoices.FACE_FRONT, status=DeviceStatusChoices.STATUS_ACTIVE, cluster=clusters[0], local_context_data={"foo": 123}),
Device(name='Device 2', device_type=device_types[1], device_role=device_roles[1], platform=platforms[1], tenant=tenants[1], serial='DEF', asset_tag='1002', site=sites[1], rack=racks[1], position=2, face=DeviceFaceChoices.FACE_FRONT, status=DeviceStatusChoices.STATUS_STAGED, cluster=clusters[1]),
Device(name='Device 3', device_type=device_types[2], device_role=device_roles[2], platform=platforms[2], tenant=tenants[2], serial='GHI', asset_tag='1003', site=sites[2], rack=racks[2], position=3, face=DeviceFaceChoices.FACE_REAR, status=DeviceStatusChoices.STATUS_FAILED, cluster=clusters[2]),
)
Device.objects.bulk_create(devices)
@ -1333,6 +1431,20 @@ class DeviceTestCase(TestCase):
params = {'local_context_data': 'false'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
class ConsolePortTestCase(TestCase):
queryset = ConsolePort.objects.all()

View File

@ -2,7 +2,6 @@ from django.core.exceptions import ValidationError
from django.test import TestCase
from dcim.choices import *
from dcim.constants import CONNECTION_STATUS_CONNECTED, CONNECTION_STATUS_PLANNED
from dcim.models import *
from tenancy.models import Tenant
@ -522,14 +521,14 @@ class CablePathTestCase(TestCase):
cable3.save()
interface1 = Interface.objects.get(pk=self.interface1.pk)
self.assertEqual(interface1.connected_endpoint, self.interface2)
self.assertEqual(interface1.connection_status, CONNECTION_STATUS_PLANNED)
self.assertFalse(interface1.connection_status)
# Switch third segment from planned to connected
cable3.status = CableStatusChoices.STATUS_CONNECTED
cable3.save()
interface1 = Interface.objects.get(pk=self.interface1.pk)
self.assertEqual(interface1.connected_endpoint, self.interface2)
self.assertEqual(interface1.connection_status, CONNECTION_STATUS_CONNECTED)
self.assertTrue(interface1.connection_status)
def test_path_teardown(self):
@ -542,7 +541,7 @@ class CablePathTestCase(TestCase):
cable3.save()
interface1 = Interface.objects.get(pk=self.interface1.pk)
self.assertEqual(interface1.connected_endpoint, self.interface2)
self.assertEqual(interface1.connection_status, CONNECTION_STATUS_CONNECTED)
self.assertTrue(interface1.connection_status)
# Remove a cable
cable2.delete()

View File

@ -357,7 +357,7 @@ class RackElevationListView(PermissionRequiredMixin, View):
def get(self, request):
racks = Rack.objects.prefetch_related('site', 'group', 'tenant', 'role', 'devices__device_type')
racks = Rack.objects.prefetch_related('role')
racks = filters.RackFilterSet(request.GET, racks).qs
total_count = racks.count()

View File

@ -26,7 +26,7 @@ class WebhookForm(forms.ModelForm):
class Meta:
model = Webhook
exclude = []
exclude = ()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@ -38,13 +38,35 @@ class WebhookForm(forms.ModelForm):
@admin.register(Webhook, site=admin_site)
class WebhookAdmin(admin.ModelAdmin):
list_display = [
'name', 'models', 'payload_url', 'http_content_type', 'enabled', 'type_create', 'type_update',
'type_delete', 'ssl_verification',
'name', 'models', 'payload_url', 'http_content_type', 'enabled', 'type_create', 'type_update', 'type_delete',
'ssl_verification',
]
list_filter = [
'enabled', 'type_create', 'type_update', 'type_delete', 'obj_type',
]
form = WebhookForm
fieldsets = (
(None, {
'fields': (
'name', 'obj_type', 'enabled',
)
}),
('Events', {
'fields': (
'type_create', 'type_update', 'type_delete',
)
}),
('HTTP Request', {
'fields': (
'payload_url', 'http_method', 'http_content_type', 'additional_headers', 'body_template', 'secret',
)
}),
('SSL', {
'fields': (
'ssl_verification', 'ca_file_path',
)
})
)
def models(self, obj):
return ', '.join([ct.name for ct in obj.obj_type.all()])

View File

@ -14,7 +14,7 @@ from extras.models import (
ConfigContext, CustomFieldChoice, ExportTemplate, Graph, ImageAttachment, ObjectChange, ReportResult, Tag,
)
from extras.reports import get_report, get_reports
from extras.scripts import get_script, get_scripts
from extras.scripts import get_script, get_scripts, run_script
from utilities.api import FieldChoicesViewSet, IsAuthenticatedOrLoginNotRequired, ModelViewSet
from . import serializers
@ -265,8 +265,9 @@ class ScriptViewSet(ViewSet):
input_serializer = serializers.ScriptInputSerializer(data=request.data)
if input_serializer.is_valid():
output = script.run(input_serializer.data['data'])
script.output = output
data = input_serializer.data['data']
commit = input_serializer.data['commit']
script.output, execution_time = run_script(script, data, request, commit)
output_serializer = serializers.ScriptOutputSerializer(script)
return Response(output_serializer.data)
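A hedged example of invoking a script through the REST API with the new `commit` flag; the script path, variable name, and token below are placeholders and not values taken from this commit.

```
# Hypothetical dry run of a custom script via the REST API.
import requests

requests.post(
    "https://netbox.example.com/api/extras/scripts/example_module.MyCustomScript/",
    json={"data": {"var1": "foo"}, "commit": False},
    headers={"Authorization": "Token 0123456789abcdef0123456789abcdef01234567"},
)
```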

View File

@ -124,17 +124,18 @@ class TemplateLanguageChoices(ChoiceSet):
# Webhooks
#
class WebhookContentTypeChoices(ChoiceSet):
class WebhookHttpMethodChoices(ChoiceSet):
CONTENTTYPE_JSON = 'application/json'
CONTENTTYPE_FORMDATA = 'application/x-www-form-urlencoded'
METHOD_GET = 'GET'
METHOD_POST = 'POST'
METHOD_PUT = 'PUT'
METHOD_PATCH = 'PATCH'
METHOD_DELETE = 'DELETE'
CHOICES = (
(CONTENTTYPE_JSON, 'JSON'),
(CONTENTTYPE_FORMDATA, 'Form data'),
(METHOD_GET, 'GET'),
(METHOD_POST, 'POST'),
(METHOD_PUT, 'PUT'),
(METHOD_PATCH, 'PATCH'),
(METHOD_DELETE, 'DELETE'),
)
LEGACY_MAP = {
CONTENTTYPE_JSON: 1,
CONTENTTYPE_FORMDATA: 2,
}

View File

@ -138,6 +138,8 @@ LOG_LEVEL_CODES = {
LOG_FAILURE: 'failure',
}
HTTP_CONTENT_TYPE_JSON = 'application/json'
# Models which support registered webhooks
WEBHOOK_MODELS = Q(
Q(app_label='circuits', model__in=[

View File

@ -5,11 +5,14 @@ from copy import deepcopy
from datetime import timedelta
from django.conf import settings
from django.contrib import messages
from django.db.models.signals import pre_delete, post_save
from django.utils import timezone
from django_prometheus.models import model_deletes, model_inserts, model_updates
from redis.exceptions import RedisError
from extras.utils import is_taggable
from utilities.api import is_api_request
from utilities.querysets import DummyQuerySet
from .choices import ObjectChangeActionChoices
from .models import ObjectChange
@ -98,7 +101,12 @@ class ObjectChangeMiddleware(object):
if not _thread_locals.changed_objects:
return response
# Disconnect our receivers from the post_save and post_delete signals.
post_save.disconnect(handle_changed_object, dispatch_uid='handle_changed_object')
pre_delete.disconnect(handle_deleted_object, dispatch_uid='handle_deleted_object')
# Create records for any cached objects that were changed.
redis_failed = False
for instance, action in _thread_locals.changed_objects:
# Refresh cached custom field values
@ -114,7 +122,16 @@ class ObjectChangeMiddleware(object):
objectchange.save()
# Enqueue webhooks
enqueue_webhooks(instance, request.user, request.id, action)
try:
enqueue_webhooks(instance, request.user, request.id, action)
except RedisError as e:
if not redis_failed and not is_api_request(request):
messages.error(
request,
"There was an error processing webhooks for this request. Check that the Redis service is "
"running and reachable. The full error details were: {}".format(e)
)
redis_failed = True
# Increment metric counters
if action == ObjectChangeActionChoices.ACTION_CREATE:

View File

@ -0,0 +1,48 @@
import json
from django.db import migrations, models
def json_to_text(apps, schema_editor):
"""
Convert a JSON representation of HTTP headers to key-value pairs (one header per line)
"""
Webhook = apps.get_model('extras', 'Webhook')
for webhook in Webhook.objects.exclude(additional_headers=''):
data = json.loads(webhook.additional_headers)
headers = ['{}: {}'.format(k, v) for k, v in data.items()]
Webhook.objects.filter(pk=webhook.pk).update(additional_headers='\n'.join(headers))
class Migration(migrations.Migration):
dependencies = [
('extras', '0037_configcontexts_clusters'),
]
operations = [
migrations.AddField(
model_name='webhook',
name='http_method',
field=models.CharField(default='POST', max_length=30),
),
migrations.AddField(
model_name='webhook',
name='body_template',
field=models.TextField(blank=True),
),
migrations.AlterField(
model_name='webhook',
name='additional_headers',
field=models.TextField(blank=True, default=''),
preserve_default=False,
),
migrations.AlterField(
model_name='webhook',
name='http_content_type',
field=models.CharField(default='application/json', max_length=100),
),
migrations.RunPython(
code=json_to_text
),
]
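A worked example (editor's illustration, not part of the migration) of the conversion `json_to_text()` performs on a legacy header value:

```
import json

legacy = '{"X-API-KEY": "abc123", "X-Foo": "Bar"}'
data = json.loads(legacy)
print('\n'.join('{}: {}'.format(k, v) for k, v in data.items()))
# X-API-KEY: abc123
# X-Foo: Bar
```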

View File

@ -1,3 +1,4 @@
import json
from collections import OrderedDict
from datetime import date
@ -12,6 +13,7 @@ from django.http import HttpResponse
from django.template import Template, Context
from django.urls import reverse
from django.utils.text import slugify
from rest_framework.utils.encoders import JSONEncoder
from taggit.models import TagBase, GenericTaggedItemBase
from utilities.fields import ColorField
@ -52,7 +54,6 @@ class Webhook(models.Model):
delete in NetBox. The request will contain a representation of the object, which the remote application can act on.
Each Webhook can be limited to firing only on certain actions or certain object types.
"""
obj_type = models.ManyToManyField(
to=ContentType,
related_name='webhooks',
@ -81,17 +82,33 @@ class Webhook(models.Model):
verbose_name='URL',
help_text="A POST will be sent to this URL when the webhook is called."
)
http_content_type = models.CharField(
max_length=50,
choices=WebhookContentTypeChoices,
default=WebhookContentTypeChoices.CONTENTTYPE_JSON,
verbose_name='HTTP content type'
enabled = models.BooleanField(
default=True
)
additional_headers = JSONField(
null=True,
http_method = models.CharField(
max_length=30,
choices=WebhookHttpMethodChoices,
default=WebhookHttpMethodChoices.METHOD_POST,
verbose_name='HTTP method'
)
http_content_type = models.CharField(
max_length=100,
default=HTTP_CONTENT_TYPE_JSON,
verbose_name='HTTP content type',
help_text='The complete list of official content types is available '
'<a href="https://www.iana.org/assignments/media-types/media-types.xhtml">here</a>.'
)
additional_headers = models.TextField(
blank=True,
help_text="User supplied headers which should be added to the request in addition to the HTTP content type. "
"Headers are supplied as key/value pairs in a JSON object."
help_text="User-supplied HTTP headers to be sent with the request in addition to the HTTP content type. "
"Headers should be defined in the format <code>Name: Value</code>. Jinja2 template processing is "
"support with the same context as the request body (below)."
)
body_template = models.TextField(
blank=True,
help_text='Jinja2 template for a custom request body. If blank, a JSON object representing the change will be '
'included. Available context data includes: <code>event</code>, <code>model</code>, '
'<code>timestamp</code>, <code>username</code>, <code>request_id</code>, and <code>data</code>.'
)
secret = models.CharField(
max_length=255,
@ -101,9 +118,6 @@ class Webhook(models.Model):
"the secret as the key. The secret is not transmitted in "
"the request."
)
enabled = models.BooleanField(
default=True
)
ssl_verification = models.BooleanField(
default=True,
verbose_name='SSL verification',
@ -126,9 +140,6 @@ class Webhook(models.Model):
return self.name
def clean(self):
"""
Validate model
"""
if not self.type_create and not self.type_delete and not self.type_update:
raise ValidationError(
"You must select at least one type: create, update, and/or delete."
@ -136,14 +147,30 @@ class Webhook(models.Model):
if not self.ssl_verification and self.ca_file_path:
raise ValidationError({
'ca_file_path': 'Do not specify a CA certificate file if SSL verification is dissabled.'
'ca_file_path': 'Do not specify a CA certificate file if SSL verification is disabled.'
})
# Verify that JSON data is provided as an object
if self.additional_headers and type(self.additional_headers) is not dict:
raise ValidationError({
'additional_headers': 'Header JSON data must be in object form. Example: {"X-API-KEY": "abc123"}'
})
def render_headers(self, context):
"""
Render additional_headers and return a dict of Header: Value pairs.
"""
if not self.additional_headers:
return {}
ret = {}
data = render_jinja2(self.additional_headers, context)
for line in data.splitlines():
header, value = line.split(':')
ret[header.strip()] = value.strip()
return ret
def render_body(self, context):
"""
Render the body template, if defined. Otherwise, dump the context as a JSON object.
"""
if self.body_template:
return render_jinja2(self.body_template, context)
else:
return json.dumps(context, cls=JSONEncoder)
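A standalone sketch of how the new header format behaves once rendered; the header names and context values are made up, and plain `jinja2.Template` stands in for NetBox's own rendering helper:

```
from jinja2 import Template

additional_headers = "X-NetBox-Event: {{ event }}\nX-API-Key: abc123"
context = {"event": "created"}

rendered = Template(additional_headers).render(**context)
headers = {}
for line in rendered.splitlines():
    name, value = line.split(':', 1)  # this sketch splits on the first colon only
    headers[name.strip()] = value.strip()

print(headers)  # {'X-NetBox-Event': 'created', 'X-API-Key': 'abc123'}
```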
#

View File

@ -63,10 +63,6 @@ class ScriptVariable:
self.field_attrs['widget'] = widget
self.field_attrs['required'] = required
# Initialize the list of optional validators if none have already been defined
if 'validators' not in self.field_attrs:
self.field_attrs['validators'] = []
def as_field(self):
"""
Render the variable as a Django form field.
@ -227,14 +223,12 @@ class IPNetworkVar(ScriptVariable):
An IPv4 or IPv6 prefix.
"""
form_field = IPNetworkFormField
field_attrs = {
'validators': [prefix_validator]
}
def __init__(self, min_prefix_length=None, max_prefix_length=None, *args, **kwargs):
super().__init__(*args, **kwargs)
# Optional minimum/maximum prefix lengths
# Set prefix validator and optional minimum/maximum prefix lengths
self.field_attrs['validators'] = [prefix_validator]
if min_prefix_length is not None:
self.field_attrs['validators'].append(
MinPrefixLengthValidator(min_prefix_length)
@ -292,7 +286,7 @@ class BaseScript:
return vars
def run(self, data):
def run(self, data, commit):
raise NotImplementedError("The script must define a run() method.")
def as_form(self, data=None, files=None, initial=None):
@ -389,10 +383,17 @@ def run_script(script, data, request, commit=True):
# Add the current request as a property of the script
script.request = request
# Determine whether the script accepts a 'commit' argument (this was introduced in v2.7.8)
kwargs = {
'data': data
}
if 'commit' in inspect.signature(script.run).parameters:
kwargs['commit'] = commit
try:
with transaction.atomic():
start_time = time.time()
output = script.run(data)
output = script.run(**kwargs)
end_time = time.time()
if not commit:
raise AbortTransaction()
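A minimal sketch of the backward-compatibility check used above, assuming one legacy script whose `run()` accepts only `data` and one that also accepts `commit`:

```
import inspect

def legacy_run(data):
    return "legacy"

def current_run(data, commit):
    return "dry run" if not commit else "committed"

for fn in (legacy_run, current_run):
    kwargs = {'data': {}}
    if 'commit' in inspect.signature(fn).parameters:
        kwargs['commit'] = False
    print(fn(**kwargs))
```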

View File

@ -582,7 +582,7 @@ class ScriptTest(APITestCase):
var2 = IntegerVar()
var3 = BooleanVar()
def run(self, data):
def run(self, data, commit=True):
self.log_info(data['var1'])
self.log_success(data['var2'])

View File

@ -34,7 +34,7 @@ class WebhookTest(APITestCase):
DUMMY_SECRET = "LOOKATMEIMASECRETSTRING"
webhooks = Webhook.objects.bulk_create((
Webhook(name='Site Create Webhook', type_create=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET, additional_headers={'X-Foo': 'Bar'}),
Webhook(name='Site Create Webhook', type_create=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET, additional_headers='X-Foo: Bar'),
Webhook(name='Site Update Webhook', type_update=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
Webhook(name='Site Delete Webhook', type_delete=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
))

View File

@ -1,4 +1,3 @@
import datetime
import hashlib
import hmac

View File

@ -1,19 +1,21 @@
import json
import logging
import requests
from django_rq import job
from rest_framework.utils.encoders import JSONEncoder
from jinja2.exceptions import TemplateError
from .choices import ObjectChangeActionChoices, WebhookContentTypeChoices
from .choices import ObjectChangeActionChoices
from .webhooks import generate_signature
logger = logging.getLogger('netbox.webhooks_worker')
@job('default')
def process_webhook(webhook, data, model_name, event, timestamp, username, request_id):
"""
Make a POST request to the defined Webhook
"""
payload = {
context = {
'event': dict(ObjectChangeActionChoices)[event].lower(),
'timestamp': timestamp,
'model': model_name,
@ -21,29 +23,48 @@ def process_webhook(webhook, data, model_name, event, timestamp, username, reque
'request_id': request_id,
'data': data
}
# Build the headers for the HTTP request
headers = {
'Content-Type': webhook.http_content_type,
}
if webhook.additional_headers:
headers.update(webhook.additional_headers)
try:
headers.update(webhook.render_headers(context))
except (TemplateError, ValueError) as e:
logger.error("Error parsing HTTP headers for webhook {}: {}".format(webhook, e))
raise e
# Render the request body
try:
body = webhook.render_body(context)
except TemplateError as e:
logger.error("Error rendering request body for webhook {}: {}".format(webhook, e))
raise e
# Prepare the HTTP request
params = {
'method': 'POST',
'method': webhook.http_method,
'url': webhook.payload_url,
'headers': headers
'headers': headers,
'data': body,
}
logger.info(
"Sending {} request to {} ({} {})".format(
params['method'], params['url'], context['model'], context['event']
)
)
logger.debug(params)
try:
prepared_request = requests.Request(**params).prepare()
except requests.exceptions.RequestException as e:
logger.error("Error forming HTTP request: {}".format(e))
raise e
if webhook.http_content_type == WebhookContentTypeChoices.CONTENTTYPE_JSON:
params.update({'data': json.dumps(payload, cls=JSONEncoder)})
elif webhook.http_content_type == WebhookContentTypeChoices.CONTENTTYPE_FORMDATA:
params.update({'data': payload})
prepared_request = requests.Request(**params).prepare()
# If a secret key is defined, sign the request with a hash of the key and its content
if webhook.secret != '':
# Sign the request with a hash of the secret key and its content.
prepared_request.headers['X-Hook-Signature'] = generate_signature(prepared_request.body, webhook.secret)
# Send the request
with requests.Session() as session:
session.verify = webhook.ssl_verification
if webhook.ca_file_path:
@ -51,8 +72,10 @@ def process_webhook(webhook, data, model_name, event, timestamp, username, reque
response = session.send(prepared_request)
if 200 <= response.status_code <= 299:
logger.info("Request succeeded; response status {}".format(response.status_code))
return 'Status {} returned, webhook successfully processed.'.format(response.status_code)
else:
logger.warning("Request failed; response status {}: {}".format(response.status_code, response.content))
raise requests.exceptions.RequestException(
"Status {} returned with content '{}', webhook FAILED to process.".format(
response.status_code, response.content

View File

@ -276,6 +276,7 @@ class PrefixForm(BootstrapMixin, TenancyForm, CustomFieldModelForm):
vrf = DynamicModelChoiceField(
queryset=VRF.objects.all(),
required=False,
label='VRF',
widget=APISelect(
api_url="/api/ipam/vrfs/",
)

View File

@ -385,7 +385,7 @@ class InterfaceIPAddressTable(BaseTable):
"""
List IP addresses assigned to a specific Interface.
"""
address = tables.TemplateColumn(IPADDRESS_ASSIGN_LINK, verbose_name='IP Address')
address = tables.LinkColumn(verbose_name='IP Address')
vrf = tables.TemplateColumn(VRF_LINK, verbose_name='VRF')
status = tables.TemplateColumn(STATUS_LABEL)
tenant = tables.TemplateColumn(template_code=TENANT_LINK)

View File

@ -5,6 +5,7 @@ from ipam.choices import *
from ipam.filters import *
from ipam.models import Aggregate, IPAddress, Prefix, RIR, Role, Service, VLAN, VLANGroup, VRF
from virtualization.models import Cluster, ClusterType, VirtualMachine
from tenancy.models import Tenant, TenantGroup
class VRFTestCase(TestCase):
@ -14,13 +15,27 @@ class VRFTestCase(TestCase):
@classmethod
def setUpTestData(cls):
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
vrfs = (
VRF(name='VRF 1', rd='65000:100', enforce_unique=False),
VRF(name='VRF 2', rd='65000:200', enforce_unique=False),
VRF(name='VRF 3', rd='65000:300', enforce_unique=False),
VRF(name='VRF 4', rd='65000:400', enforce_unique=True),
VRF(name='VRF 5', rd='65000:500', enforce_unique=True),
VRF(name='VRF 6', rd='65000:600', enforce_unique=True),
VRF(name='VRF 1', rd='65000:100', tenant=tenants[0], enforce_unique=False),
VRF(name='VRF 2', rd='65000:200', tenant=tenants[0], enforce_unique=False),
VRF(name='VRF 3', rd='65000:300', tenant=tenants[1], enforce_unique=False),
VRF(name='VRF 4', rd='65000:400', tenant=tenants[1], enforce_unique=True),
VRF(name='VRF 5', rd='65000:500', tenant=tenants[2], enforce_unique=True),
VRF(name='VRF 6', rd='65000:600', tenant=tenants[2], enforce_unique=True),
)
VRF.objects.bulk_create(vrfs)
@ -43,6 +58,20 @@ class VRFTestCase(TestCase):
params = {'id__in': ','.join([str(id) for id in id_list])}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 3)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
class RIRTestCase(TestCase):
queryset = RIR.objects.all()
@ -198,15 +227,29 @@ class PrefixTestCase(TestCase):
)
Role.objects.bulk_create(roles)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
prefixes = (
Prefix(prefix='10.0.0.0/24', site=None, vrf=None, vlan=None, role=None, is_pool=True),
Prefix(prefix='10.0.1.0/24', site=sites[0], vrf=vrfs[0], vlan=vlans[0], role=roles[0]),
Prefix(prefix='10.0.2.0/24', site=sites[1], vrf=vrfs[1], vlan=vlans[1], role=roles[1], status=PrefixStatusChoices.STATUS_DEPRECATED),
Prefix(prefix='10.0.3.0/24', site=sites[2], vrf=vrfs[2], vlan=vlans[2], role=roles[2], status=PrefixStatusChoices.STATUS_RESERVED),
Prefix(prefix='2001:db8::/64', site=None, vrf=None, vlan=None, role=None, is_pool=True),
Prefix(prefix='2001:db8:0:1::/64', site=sites[0], vrf=vrfs[0], vlan=vlans[0], role=roles[0]),
Prefix(prefix='2001:db8:0:2::/64', site=sites[1], vrf=vrfs[1], vlan=vlans[1], role=roles[1], status=PrefixStatusChoices.STATUS_DEPRECATED),
Prefix(prefix='2001:db8:0:3::/64', site=sites[2], vrf=vrfs[2], vlan=vlans[2], role=roles[2], status=PrefixStatusChoices.STATUS_RESERVED),
Prefix(prefix='10.0.0.0/24', tenant=None, site=None, vrf=None, vlan=None, role=None, is_pool=True),
Prefix(prefix='10.0.1.0/24', tenant=tenants[0], site=sites[0], vrf=vrfs[0], vlan=vlans[0], role=roles[0]),
Prefix(prefix='10.0.2.0/24', tenant=tenants[1], site=sites[1], vrf=vrfs[1], vlan=vlans[1], role=roles[1], status=PrefixStatusChoices.STATUS_DEPRECATED),
Prefix(prefix='10.0.3.0/24', tenant=tenants[2], site=sites[2], vrf=vrfs[2], vlan=vlans[2], role=roles[2], status=PrefixStatusChoices.STATUS_RESERVED),
Prefix(prefix='2001:db8::/64', tenant=None, site=None, vrf=None, vlan=None, role=None, is_pool=True),
Prefix(prefix='2001:db8:0:1::/64', tenant=tenants[0], site=sites[0], vrf=vrfs[0], vlan=vlans[0], role=roles[0]),
Prefix(prefix='2001:db8:0:2::/64', tenant=tenants[1], site=sites[1], vrf=vrfs[1], vlan=vlans[1], role=roles[1], status=PrefixStatusChoices.STATUS_DEPRECATED),
Prefix(prefix='2001:db8:0:3::/64', tenant=tenants[2], site=sites[2], vrf=vrfs[2], vlan=vlans[2], role=roles[2], status=PrefixStatusChoices.STATUS_RESERVED),
Prefix(prefix='10.0.0.0/16'),
Prefix(prefix='2001:db8::/32'),
)
@ -285,6 +328,20 @@ class PrefixTestCase(TestCase):
params = {'status': [PrefixStatusChoices.STATUS_DEPRECATED, PrefixStatusChoices.STATUS_RESERVED]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
class IPAddressTestCase(TestCase):
queryset = IPAddress.objects.all()
@ -332,18 +389,31 @@ class IPAddressTestCase(TestCase):
)
Interface.objects.bulk_create(interfaces)
ipaddresses = (
IPAddress(address='10.0.0.1/24', vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a'),
IPAddress(address='10.0.0.2/24', vrf=vrfs[0], interface=interfaces[0], status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-b'),
IPAddress(address='10.0.0.3/24', vrf=vrfs[1], interface=interfaces[1], status=IPAddressStatusChoices.STATUS_RESERVED, role=IPAddressRoleChoices.ROLE_VIP, dns_name='ipaddress-c'),
IPAddress(address='10.0.0.4/24', vrf=vrfs[2], interface=interfaces[2], status=IPAddressStatusChoices.STATUS_DEPRECATED, role=IPAddressRoleChoices.ROLE_SECONDARY, dns_name='ipaddress-d'),
IPAddress(address='10.0.0.1/25', vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE),
IPAddress(address='2001:db8::1/64', vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a'),
IPAddress(address='2001:db8::2/64', vrf=vrfs[0], interface=interfaces[3], status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-b'),
IPAddress(address='2001:db8::3/64', vrf=vrfs[1], interface=interfaces[4], status=IPAddressStatusChoices.STATUS_RESERVED, role=IPAddressRoleChoices.ROLE_VIP, dns_name='ipaddress-c'),
IPAddress(address='2001:db8::4/64', vrf=vrfs[2], interface=interfaces[5], status=IPAddressStatusChoices.STATUS_DEPRECATED, role=IPAddressRoleChoices.ROLE_SECONDARY, dns_name='ipaddress-d'),
IPAddress(address='2001:db8::1/65', vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE),
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
ipaddresses = (
IPAddress(address='10.0.0.1/24', tenant=None, vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a'),
IPAddress(address='10.0.0.2/24', tenant=tenants[0], vrf=vrfs[0], interface=interfaces[0], status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-b'),
IPAddress(address='10.0.0.3/24', tenant=tenants[1], vrf=vrfs[1], interface=interfaces[1], status=IPAddressStatusChoices.STATUS_RESERVED, role=IPAddressRoleChoices.ROLE_VIP, dns_name='ipaddress-c'),
IPAddress(address='10.0.0.4/24', tenant=tenants[2], vrf=vrfs[2], interface=interfaces[2], status=IPAddressStatusChoices.STATUS_DEPRECATED, role=IPAddressRoleChoices.ROLE_SECONDARY, dns_name='ipaddress-d'),
IPAddress(address='10.0.0.1/25', tenant=None, vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE),
IPAddress(address='2001:db8::1/64', tenant=None, vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a'),
IPAddress(address='2001:db8::2/64', tenant=tenants[0], vrf=vrfs[0], interface=interfaces[3], status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-b'),
IPAddress(address='2001:db8::3/64', tenant=tenants[1], vrf=vrfs[1], interface=interfaces[4], status=IPAddressStatusChoices.STATUS_RESERVED, role=IPAddressRoleChoices.ROLE_VIP, dns_name='ipaddress-c'),
IPAddress(address='2001:db8::4/64', tenant=tenants[2], vrf=vrfs[2], interface=interfaces[5], status=IPAddressStatusChoices.STATUS_DEPRECATED, role=IPAddressRoleChoices.ROLE_SECONDARY, dns_name='ipaddress-d'),
IPAddress(address='2001:db8::1/65', tenant=None, vrf=None, interface=None, status=IPAddressStatusChoices.STATUS_ACTIVE),
)
IPAddress.objects.bulk_create(ipaddresses)
@ -427,6 +497,20 @@ class IPAddressTestCase(TestCase):
params = {'role': [IPAddressRoleChoices.ROLE_SECONDARY, IPAddressRoleChoices.ROLE_VIP]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
class VLANGroupTestCase(TestCase):
queryset = VLANGroup.objects.all()
@ -524,13 +608,27 @@ class VLANTestCase(TestCase):
)
VLANGroup.objects.bulk_create(groups)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
vlans = (
VLAN(vid=101, name='VLAN 101', site=sites[0], group=groups[0], role=roles[0], status=VLANStatusChoices.STATUS_ACTIVE),
VLAN(vid=102, name='VLAN 102', site=sites[0], group=groups[0], role=roles[0], status=VLANStatusChoices.STATUS_ACTIVE),
VLAN(vid=201, name='VLAN 201', site=sites[1], group=groups[1], role=roles[1], status=VLANStatusChoices.STATUS_DEPRECATED),
VLAN(vid=202, name='VLAN 202', site=sites[1], group=groups[1], role=roles[1], status=VLANStatusChoices.STATUS_DEPRECATED),
VLAN(vid=301, name='VLAN 301', site=sites[2], group=groups[2], role=roles[2], status=VLANStatusChoices.STATUS_RESERVED),
VLAN(vid=302, name='VLAN 302', site=sites[2], group=groups[2], role=roles[2], status=VLANStatusChoices.STATUS_RESERVED),
VLAN(vid=101, name='VLAN 101', site=sites[0], group=groups[0], role=roles[0], tenant=tenants[0], status=VLANStatusChoices.STATUS_ACTIVE),
VLAN(vid=102, name='VLAN 102', site=sites[0], group=groups[0], role=roles[0], tenant=tenants[0], status=VLANStatusChoices.STATUS_ACTIVE),
VLAN(vid=201, name='VLAN 201', site=sites[1], group=groups[1], role=roles[1], tenant=tenants[1], status=VLANStatusChoices.STATUS_DEPRECATED),
VLAN(vid=202, name='VLAN 202', site=sites[1], group=groups[1], role=roles[1], tenant=tenants[1], status=VLANStatusChoices.STATUS_DEPRECATED),
VLAN(vid=301, name='VLAN 301', site=sites[2], group=groups[2], role=roles[2], tenant=tenants[2], status=VLANStatusChoices.STATUS_RESERVED),
VLAN(vid=302, name='VLAN 302', site=sites[2], group=groups[2], role=roles[2], tenant=tenants[2], status=VLANStatusChoices.STATUS_RESERVED),
)
VLAN.objects.bulk_create(vlans)
@ -579,6 +677,20 @@ class VLANTestCase(TestCase):
params = {'status': [VLANStatusChoices.STATUS_ACTIVE, VLANStatusChoices.STATUS_DEPRECATED]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
class ServiceTestCase(TestCase):
queryset = Service.objects.all()

View File

@ -1,6 +1,6 @@
from collections import OrderedDict
from django.db.models import Count, F, OuterRef, Subquery
from django.db.models import Count, F
from django.shortcuts import render
from django.views.generic import View
from rest_framework.response import Response
@ -8,7 +8,7 @@ from rest_framework.reverse import reverse
from rest_framework.views import APIView
from circuits.filters import CircuitFilterSet, ProviderFilterSet
from circuits.models import Circuit, CircuitTermination, Provider
from circuits.models import Circuit, Provider
from circuits.tables import CircuitTable, ProviderTable
from dcim.filters import (
CableFilterSet, DeviceFilterSet, DeviceTypeFilterSet, PowerFeedFilterSet, RackFilterSet, RackGroupFilterSet, SiteFilterSet,
@ -50,15 +50,7 @@ SEARCH_TYPES = OrderedDict((
'permission': 'circuits.view_circuit',
'queryset': Circuit.objects.prefetch_related(
'type', 'provider', 'tenant', 'terminations__site'
).annotate(
# Annotate A/Z terminations
a_side=Subquery(
CircuitTermination.objects.filter(circuit=OuterRef('pk')).filter(term_side='A').values('site__name')[:1]
),
z_side=Subquery(
CircuitTermination.objects.filter(circuit=OuterRef('pk')).filter(term_side='Z').values('site__name')[:1]
),
),
).annotate_sites(),
'filterset': CircuitFilterSet,
'table': CircuitTable,
'url': 'circuits:circuit_list',
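The `annotate_sites()` call above replaces the inline `Subquery` annotations removed from the global search queryset. A hedged sketch of what such a custom queryset method could look like, reconstructed from the removed annotations (the actual method lives in the circuits app and may differ):

```python
# Sketch only: a custom queryset method mirroring the removed Subquery annotations.
from django.db.models import OuterRef, QuerySet, Subquery


class CircuitQuerySet(QuerySet):

    def annotate_sites(self):
        """Annotate the A-side and Z-side termination site names onto each circuit."""
        from circuits.models import CircuitTermination  # local import to avoid circularity
        terminations = CircuitTermination.objects.filter(circuit=OuterRef('pk'))
        return self.annotate(
            a_side=Subquery(terminations.filter(term_side='A').values('site__name')[:1]),
            z_side=Subquery(terminations.filter(term_side='Z').values('site__name')[:1]),
        )
```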

View File

@ -1,14 +1,11 @@
// Toggle the display of device images within an SVG rack elevation
$('button.toggle-images').click(function() {
var selected = $(this).attr('selected');
var rack_front = $("#rack_front");
var rack_rear = $("#rack_rear");
var rack_elevation = $(".rack_elevation");
if (selected) {
$('.device-image', rack_front.contents()).addClass('hidden');
$('.device-image', rack_rear.contents()).addClass('hidden');
$('.device-image', rack_elevation.contents()).addClass('hidden');
} else {
$('.device-image', rack_front.contents()).removeClass('hidden');
$('.device-image', rack_rear.contents()).removeClass('hidden');
$('.device-image', rack_elevation.contents()).removeClass('hidden');
}
$(this).attr('selected', !selected);
$(this).children('span').toggleClass('glyphicon-check glyphicon-unchecked');

View File

@ -603,7 +603,7 @@
<button type="submit" name="_rename" formaction="{% url 'dcim:interface_bulk_rename' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Rename
</button>
<button type="submit" name="_edit" formaction="{% url 'dcim:interface_bulk_edit' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<button type="submit" name="_edit" formaction="{% url 'dcim:interface_bulk_edit' %}?device={{ device.pk }}&return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Edit
</button>
{% endif %}
@ -666,7 +666,7 @@
<button type="submit" name="_rename" formaction="{% url 'dcim:consoleserverport_bulk_rename' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Rename
</button>
<button type="submit" name="_edit" formaction="{% url 'dcim:consoleserverport_bulk_edit' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<button type="submit" name="_edit" formaction="{% url 'dcim:consoleserverport_bulk_edit' %}?device={{ device.pk }}&return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Edit
</button>
<button type="submit" name="_disconnect" formaction="{% url 'dcim:consoleserverport_bulk_disconnect' %}?return_url={{ device.get_absolute_url }}" class="btn btn-danger btn-xs">
@ -728,7 +728,7 @@
<button type="submit" name="_rename" formaction="{% url 'dcim:poweroutlet_bulk_rename' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Rename
</button>
<button type="submit" name="_edit" formaction="{% url 'dcim:poweroutlet_bulk_edit' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<button type="submit" name="_edit" formaction="{% url 'dcim:poweroutlet_bulk_edit' %}?device={{ device.pk }}&return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Edit
</button>
<button type="submit" name="_disconnect" formaction="{% url 'dcim:poweroutlet_bulk_disconnect' %}?return_url={{ device.get_absolute_url }}" class="btn btn-danger btn-xs">
@ -789,7 +789,7 @@
<button type="submit" name="_rename" formaction="{% url 'dcim:frontport_bulk_rename' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Rename
</button>
<button type="submit" name="_edit" formaction="{% url 'dcim:frontport_bulk_edit' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<button type="submit" name="_edit" formaction="{% url 'dcim:frontport_bulk_edit' %}?device={{ device.pk }}&return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Edit
</button>
<button type="submit" name="_disconnect" formaction="{% url 'dcim:frontport_bulk_disconnect' %}?return_url={{ device.get_absolute_url }}" class="btn btn-danger btn-xs">
@ -847,7 +847,7 @@
<button type="submit" name="_rename" formaction="{% url 'dcim:rearport_bulk_rename' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Rename
</button>
<button type="submit" name="_edit" formaction="{% url 'dcim:rearport_bulk_edit' %}?return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<button type="submit" name="_edit" formaction="{% url 'dcim:rearport_bulk_edit' %}?device={{ device.pk }}&return_url={{ device.get_absolute_url }}" class="btn btn-warning btn-xs">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Edit
</button>
<button type="submit" name="_disconnect" formaction="{% url 'dcim:rearport_bulk_disconnect' %}?return_url={{ device.get_absolute_url }}" class="btn btn-danger btn-xs">

View File

@ -1,6 +1,6 @@
<object data="{% url 'dcim-api:rack-elevation' pk=rack.pk %}?face={{face}}&render=svg" id="rack_{{ face }}"></object>
<object data="{% url 'dcim-api:rack-elevation' pk=rack.pk %}?face={{face}}&render=svg" class="rack_elevation"></object>
<div class="text-center text-small">
<a href="{% url 'dcim-api:rack-elevation' pk=rack.pk %}?face={{face}}&render=svg" id="rack_{{ face }}">
<a href="{% url 'dcim-api:rack-elevation' pk=rack.pk %}?face={{face}}&render=svg">
<i class="fa fa-download"></i> Save SVG
</a>
</div>

View File

@ -0,0 +1,10 @@
{% load helpers %}
<div class="rack_header">
<strong><a href="{% url 'dcim:rack' pk=rack.pk %}">{{ rack.name }}</a></strong>
{% if rack.role %}
<br /><small class="label" style="color: {{ rack.role.color|fgcolor }}; background-color: #{{ rack.role.color }}">{{ rack.role }}</small>
{% endif %}
{% if rack.facility_id %}
<br /><small class="text-muted">{{ rack.facility_id }}</small>
{% endif %}
</div>

View File

@ -17,16 +17,10 @@
<div style="white-space: nowrap; overflow-x: scroll;">
{% for rack in page %}
<div style="display: inline-block; width: 266px">
<div class="rack_header">
<strong><a href="{% url 'dcim:rack' pk=rack.pk %}">{{ rack.name|truncatechars:"25" }}</a></strong>
<p><small class="text-muted">{{ rack.facility_id|truncatechars:"30" }}</small></p>
</div>
{% include 'dcim/inc/rack_elevation_header.html' %}
{% include 'dcim/inc/rack_elevation.html' with face=rack_face %}
<div class="clearfix"></div>
<div class="rack_header">
<strong><a href="{% url 'dcim:rack' pk=rack.pk %}">{{ rack.name|truncatechars:"25" }}</a></strong>
<p><small class="text-muted">{{ rack.facility_id|truncatechars:"30" }}</small></p>
</div>
{% include 'dcim/inc/rack_elevation_header.html' %}
</div>
{% endfor %}
</div>

View File

@ -17,7 +17,7 @@
</div>
<h1>{% block title %}{{ content_type.model_class|model_name_plural|bettertitle }}{% endblock %}</h1>
<div class="row">
<div class="col-md-9">
<div class="col-md-{% if filter_form %}9{% else %}12{% endif %}">
{% with bulk_edit_url=content_type.model_class|url_name:"bulk_edit" bulk_delete_url=content_type.model_class|url_name:"bulk_delete" %}
{% if permissions.change or permissions.delete %}
<form method="post" class="form form-horizontal">
@ -34,12 +34,12 @@
</div>
<div class="pull-right">
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if bulk_querystring %}?{{ bulk_querystring }}{% elif request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm" disabled="disabled">
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm" disabled="disabled">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Edit All
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if bulk_querystring %}?{{ bulk_querystring }}{% elif request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm" disabled="disabled">
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm" disabled="disabled">
<span class="glyphicon glyphicon-trash" aria-hidden="true"></span> Delete All
</button>
{% endif %}
@ -51,12 +51,12 @@
<div class="pull-left noprint">
{% block bulk_buttons %}{% endblock %}
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}" class="btn btn-warning btn-sm">
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm">
<span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> Edit Selected
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}" class="btn btn-danger btn-sm">
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm">
<span class="glyphicon glyphicon-trash" aria-hidden="true"></span> Delete Selected
</button>
{% endif %}
@ -69,11 +69,11 @@
{% include 'inc/paginator.html' with paginator=table.paginator page=table.page %}
<div class="clearfix"></div>
</div>
<div class="col-md-3 noprint">
{% if filter_form %}
{% if filter_form %}
<div class="col-md-3 noprint">
{% include 'inc/search_panel.html' %}
{% endif %}
{% block sidebar %}{% endblock %}
</div>
{% block sidebar %}{% endblock %}
</div>
{% endif %}
</div>
{% endblock %}

View File

@ -6,6 +6,7 @@ from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import FieldError, MultipleObjectsReturned, ObjectDoesNotExist
from django.db.models import ManyToManyField, ProtectedError
from django.http import Http404
from django.urls import reverse
from rest_framework.exceptions import APIException
from rest_framework.permissions import BasePermission
from rest_framework.relations import PrimaryKeyRelatedField, RelatedField
@ -41,6 +42,14 @@ def get_serializer_for_model(model, prefix=''):
)
def is_api_request(request):
"""
Return True if the request is being made via the REST API.
"""
api_path = reverse('api-root')
return request.path_info.startswith(api_path)
#
# Authentication
#
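A brief, hedged illustration of `is_api_request()` using Django's `RequestFactory`, assuming the API root resolves to `/api/` as in a default NetBox deployment:

```python
# Illustration only; requires a configured Django environment where
# reverse('api-root') resolves to /api/.
from django.test import RequestFactory
from utilities.api import is_api_request

factory = RequestFactory()
assert is_api_request(factory.get('/api/dcim/devices/'))   # True: path under the API root
assert not is_api_request(factory.get('/dcim/devices/'))   # False: regular UI request
```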

View File

@ -89,6 +89,10 @@ class CustomChoiceFieldInspector(FieldInspector):
value_schema = openapi.Schema(type=schema_type)
value_schema['x-nullable'] = True
if isinstance(choices[0], int):
# Change value_schema for IPAddressFamilyChoices, RackWidthChoices
value_schema = openapi.Schema(type=openapi.TYPE_INTEGER)
schema = SwaggerType(type=openapi.TYPE_OBJECT, required=["label", "value"], properties={
"label": openapi.Schema(type=openapi.TYPE_STRING),
"value": value_schema

View File

@ -2,8 +2,9 @@ import csv
import json
import re
from io import StringIO
import yaml
import django_filters
import yaml
from django import forms
from django.conf import settings
from django.contrib.postgres.forms.jsonb import JSONField as _JSONField, InvalidJSONInput
@ -564,18 +565,17 @@ class TagFilterField(forms.MultipleChoiceField):
class DynamicModelChoiceMixin:
field_modifier = ''
filter = django_filters.ModelChoiceFilter
def get_bound_field(self, form, field_name):
bound_field = BoundField(form, self, field_name)
# Modify the QuerySet of the field before we return it. Limit choices to any data already bound: Options
# will be populated on-demand via the APISelect widget.
field_name = '{}{}'.format(self.to_field_name or 'pk', self.field_modifier)
if bound_field.data:
self.queryset = self.queryset.filter(**{field_name: self.prepare_value(bound_field.data)})
elif bound_field.initial:
self.queryset = self.queryset.filter(**{field_name: self.prepare_value(bound_field.initial)})
data = self.prepare_value(bound_field.data or bound_field.initial)
if data:
filter = self.filter(field_name=self.to_field_name or 'pk', queryset=self.queryset)
self.queryset = filter.filter(self.queryset, data)
else:
self.queryset = self.queryset.none()
@ -594,7 +594,7 @@ class DynamicModelMultipleChoiceField(DynamicModelChoiceMixin, forms.ModelMultip
"""
A multiple-choice version of DynamicModelChoiceField.
"""
field_modifier = '__in'
filter = django_filters.ModelMultipleChoiceFilter
class LaxURLField(forms.URLField):
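The change above replaces the hand-rolled `field_modifier` queryset narrowing with `django-filters` filter classes. A hedged usage sketch of the field on a form (the form and model names are illustrative; the real fields also wire up an `APISelect` widget):

```python
# Sketch: bound or initial data is used to narrow the field's queryset, while the
# full option list is loaded on demand by the APISelect widget.
from django import forms
from dcim.models import Site
from utilities.forms import DynamicModelChoiceField


class ExampleForm(forms.Form):  # hypothetical form for illustration
    site = DynamicModelChoiceField(
        queryset=Site.objects.all(),
        to_field_name='slug',  # match submitted data against slug instead of pk
        required=False,
    )
```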

View File

@ -5,6 +5,7 @@ from django.db import ProgrammingError
from django.http import Http404, HttpResponseRedirect
from django.urls import reverse
from .api import is_api_request
from .views import server_error
@ -38,9 +39,8 @@ class APIVersionMiddleware(object):
self.get_response = get_response
def __call__(self, request):
api_path = reverse('api-root')
response = self.get_response(request)
if request.path_info.startswith(api_path):
if is_api_request(request):
response['API-Version'] = settings.REST_FRAMEWORK_VERSION
return response

View File

@ -4,8 +4,8 @@
<span class="fa fa-upload" aria-hidden="true"></span>
Export <span class="caret"></span>
</button>
<ul class="dropdown-menu">
<li><a href="?{% if url_params %}{{ url_params.urlencode }}&{% endif %}export">CSV (default)</a></li>
<ul class="dropdown-menu dropdown-menu-right">
<li><a href="?{% if url_params %}{{ url_params.urlencode }}&{% endif %}export">Default format</a></li>
<li class="divider"></li>
{% for et in export_templates %}
<li><a href="?{% if url_params %}{{ url_params.urlencode }}&{% endif %}export={{ et.name }}"{% if et.description %} title="{{ et.description }}"{% endif %}>{{ et.name }}</a></li>

View File

@ -172,24 +172,29 @@ class ViewTestCases:
@override_settings(EXEMPT_VIEW_PERMISSIONS=[])
def test_create_object(self):
# Try GET without permission
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.post(self._get_url('add')), 403)
# Try GET with permission
self.add_permissions(
'{}.add_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.get(path=self._get_url('add'))
self.assertHttpStatus(response, 200)
# Try POST with permission
initial_count = self.model.objects.count()
request = {
'path': self._get_url('add'),
'data': post_data(self.form_data),
'follow': False, # Do not follow 302 redirects
}
# Attempt to make the request without required permissions
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.post(**request), 403)
# Assign the required permission and submit again
self.add_permissions(
'{}.add_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.post(**request)
self.assertHttpStatus(response, 302)
# Validate object creation
self.assertEqual(initial_count + 1, self.model.objects.count())
instance = self.model.objects.order_by('-pk').first()
self.assertInstanceEqual(instance, self.form_data)
@ -204,23 +209,27 @@ class ViewTestCases:
def test_edit_object(self):
instance = self.model.objects.first()
# Try GET without permission
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.post(self._get_url('edit', instance)), 403)
# Try GET with permission
self.add_permissions(
'{}.change_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.get(path=self._get_url('edit', instance))
self.assertHttpStatus(response, 200)
# Try POST with permission
request = {
'path': self._get_url('edit', instance),
'data': post_data(self.form_data),
'follow': False, # Do not follow 302 redirects
}
# Attempt to make the request without required permissions
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.post(**request), 403)
# Assign the required permission and submit again
self.add_permissions(
'{}.change_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.post(**request)
self.assertHttpStatus(response, 302)
# Validate object modifications
instance = self.model.objects.get(pk=instance.pk)
self.assertInstanceEqual(instance, self.form_data)
@ -232,23 +241,26 @@ class ViewTestCases:
def test_delete_object(self):
instance = self.model.objects.first()
# Try GET without permissions
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.post(self._get_url('delete', instance)), 403)
# Try GET with permission
self.add_permissions(
'{}.delete_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.get(path=self._get_url('delete', instance))
self.assertHttpStatus(response, 200)
request = {
'path': self._get_url('delete', instance),
'data': {'confirm': True},
'follow': False, # Do not follow 302 redirects
}
# Attempt to make the request without required permissions
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.post(**request), 403)
# Assign the required permission and submit again
self.add_permissions(
'{}.delete_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.post(**request)
self.assertHttpStatus(response, 302)
# Validate object deletion
with self.assertRaises(ObjectDoesNotExist):
self.model.objects.get(pk=instance.pk)
@ -314,6 +326,20 @@ class ViewTestCases:
@override_settings(EXEMPT_VIEW_PERMISSIONS=[])
def test_import_objects(self):
# Test GET without permission
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.get(self._get_url('import')), 403)
# Test GET with permission
self.add_permissions(
'{}.view_{}'.format(self.model._meta.app_label, self.model._meta.model_name),
'{}.add_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.get(self._get_url('import'))
self.assertHttpStatus(response, 200)
# Test POST with permission
initial_count = self.model.objects.count()
request = {
'path': self._get_url('import'),
@ -321,19 +347,10 @@ class ViewTestCases:
'csv': '\n'.join(self.csv_data)
}
}
# Attempt to make the request without required permissions
with disable_warnings('django.request'):
self.assertHttpStatus(self.client.post(**request), 403)
# Assign the required permission and submit again
self.add_permissions(
'{}.view_{}'.format(self.model._meta.app_label, self.model._meta.model_name),
'{}.add_{}'.format(self.model._meta.app_label, self.model._meta.model_name)
)
response = self.client.post(**request)
self.assertHttpStatus(response, 200)
# Validate import of new objects
self.assertEqual(self.model.objects.count(), initial_count + len(self.csv_data) - 1)
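For reference, a hedged sketch of the `csv_data` attribute consumed by `test_import_objects()` above; the first row is the CSV header and each following row creates one object (hence the `len(self.csv_data) - 1` count check):

```python
# Hypothetical csv_data for a Site-like test case; column names are illustrative.
csv_data = (
    "name,slug",
    "Site 4,site-4",
    "Site 5,site-5",
)
# The test posts '\n'.join(csv_data) and expects two new objects to be created.
```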
class BulkEditObjectsViewTestCase(ModelViewTestCase):

View File

@ -31,8 +31,9 @@ def csv_format(data):
if not isinstance(value, str):
value = '{}'.format(value)
# Double-quote the value if it contains a comma
# Double-quote the value if it contains a comma or line break
if ',' in value or '\n' in value:
value = value.replace('"', '""') # Escape double-quotes
csv.append('"{}"'.format(value))
else:
csv.append('{}'.format(value))
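A small hedged example of the quoting behaviour described in the comment above:

```python
# Values containing a comma (or a line break) are wrapped in double quotes, and any
# embedded double quotes are escaped by doubling them before joining the row.
from utilities.utils import csv_format

csv_format(['Site 1', 'New York, NY', 42])
# -> 'Site 1,"New York, NY",42'
```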
@ -80,10 +81,12 @@ def get_subquery(model, field):
return subquery
def serialize_object(obj, extra=None):
def serialize_object(obj, extra=None, exclude=None):
"""
Return a generic JSON representation of an object using Django's built-in serializer. (This is used for things like
change logging, not the REST API.) Optionally include a dictionary to supplement the object data.
change logging, not the REST API.) Optionally include a dictionary to supplement the object data. A list of keys
can be provided to exclude them from the returned dictionary. Private fields (prefaced with an underscore) are
implicitly excluded.
"""
json_str = serialize('json', [obj])
data = json.loads(json_str)[0]['fields']
@ -102,6 +105,16 @@ def serialize_object(obj, extra=None):
if extra is not None:
data.update(extra)
# Copy keys to list to avoid 'dictionary changed size during iteration' exception
for key in list(data):
# Private fields shouldn't be logged in the object change
if isinstance(key, str) and key.startswith('_'):
data.pop(key)
# Explicitly excluded keys
if isinstance(exclude, (list, tuple)) and key in exclude:
data.pop(key)
return data
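A hedged usage sketch of the extended signature (the object and field names are illustrative):

```python
# Snapshot a saved object for change logging, explicitly excluding one field;
# keys beginning with an underscore are dropped automatically.
from utilities.utils import serialize_object

data = serialize_object(device, extra={'source': 'import'}, exclude=['comments'])
# 'device' is any saved model instance (illustrative); 'comments' will not appear in data.
```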
@ -212,18 +225,6 @@ def prepare_cloned_fields(instance):
return param_string
def querydict_to_dict(querydict):
"""
Convert a django.http.QueryDict object to a regular Python dictionary, preserving lists of multiple values.
(QueryDict.dict() will return only the last value in a list for each key.)
"""
assert isinstance(querydict, QueryDict)
return {
key: querydict.get(key) if len(value) == 1 and key != 'pk' else querydict.getlist(key)
for key, value in querydict.lists()
}
def shallow_compare_dict(source_dict, destination_dict, exclude=None):
"""
Return a new dictionary of the different keys. The values of `destination_dict` are returned. Only the equality of

View File

@ -1,6 +1,6 @@
import re
from django.core.validators import _lazy_re_compile, URLValidator
from django.core.validators import _lazy_re_compile, BaseValidator, URLValidator
class EnhancedURLValidator(URLValidator):
@ -26,3 +26,13 @@ class EnhancedURLValidator(URLValidator):
r'(?:[/?#][^\s]*)?' # Path
r'\Z', re.IGNORECASE)
schemes = AnyURLScheme()
class ExclusionValidator(BaseValidator):
"""
Ensure that a field's value is not equal to any of the specified values.
"""
message = 'This value may not be %(show_value)s.'
def compare(self, a, b):
return a in b
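A hedged example of attaching the new `ExclusionValidator` to a model field (the model and disallowed values are illustrative):

```python
# The validator raises a ValidationError when the cleaned value appears in the
# list passed to its constructor.
from django.db import models
from utilities.validators import ExclusionValidator


class ExampleModel(models.Model):  # hypothetical model for illustration
    name = models.CharField(
        max_length=50,
        validators=[ExclusionValidator(['none', 'reserved'])],
    )
```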

View File

@ -25,7 +25,7 @@ from extras.models import CustomField, CustomFieldValue, ExportTemplate
from extras.querysets import CustomFieldQueryset
from utilities.exceptions import AbortTransaction
from utilities.forms import BootstrapMixin, CSVDataField
from utilities.utils import csv_format, prepare_cloned_fields, querydict_to_dict
from utilities.utils import csv_format, prepare_cloned_fields
from .error_handlers import handle_protectederror
from .forms import ConfirmationForm, ImportForm
from .paginator import EnhancedPaginator
@ -626,12 +626,13 @@ class BulkEditView(GetReturnURLMixin, View):
model = self.queryset.model
# Create a mutable copy of the POST data
post_data = request.POST.copy()
# If we are editing *all* objects in the queryset, replace the PK list with all matched objects.
if post_data.get('_all') and self.filterset is not None:
post_data['pk'] = [obj.pk for obj in self.filterset(request.GET, model.objects.only('pk')).qs]
if request.POST.get('_all') and self.filterset is not None:
pk_list = [
obj.pk for obj in self.filterset(request.GET, model.objects.only('pk')).qs
]
else:
pk_list = request.POST.getlist('pk')
if '_apply' in request.POST:
form = self.form(model, request.POST)
@ -715,12 +716,19 @@ class BulkEditView(GetReturnURLMixin, View):
messages.error(self.request, "{} failed validation: {}".format(obj, e))
else:
# Pass the PK list as initial data to avoid binding the form
initial_data = querydict_to_dict(post_data)
# Include the PK list as initial data for the form
initial_data = {'pk': pk_list}
# Check for other contextual data needed for the form. We avoid passing all of request.GET because the
# filter values will conflict with the bulk edit form fields.
# TODO: Find a better way to accomplish this
if 'device' in request.GET:
initial_data['device'] = request.GET.get('device')
form = self.form(model, initial=initial_data)
# Retrieve objects being edited
table = self.table(self.queryset.filter(pk__in=post_data.getlist('pk')), orderable=False)
table = self.table(self.queryset.filter(pk__in=pk_list), orderable=False)
if not table.rows:
messages.warning(request, "No {} were selected.".format(model._meta.verbose_name_plural))
return redirect(self.get_return_url(request))

View File

@ -34,7 +34,7 @@ class ClusterGroupFilterSet(NameSlugSearchFilterSet):
fields = ['id', 'name', 'slug']
class ClusterFilterSet(CustomFieldFilterSet, CreatedUpdatedFilterSet):
class ClusterFilterSet(TenancyFilterSet, CustomFieldFilterSet, CreatedUpdatedFilterSet):
id__in = NumericInFilter(
field_name='id',
lookup_expr='in'
@ -84,10 +84,6 @@ class ClusterFilterSet(CustomFieldFilterSet, CreatedUpdatedFilterSet):
to_field_name='slug',
label='Cluster type (slug)',
)
tenant = django_filters.ModelMultipleChoiceFilter(
queryset=Tenant.objects.all(),
label="Tenant (ID)"
)
tag = TagFilter()
class Meta:
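`ClusterFilterSet` now mixes in `TenancyFilterSet` instead of declaring its own `tenant` filter. A hedged sketch of the filters such a mixin would contribute, inferred from the tenant and tenant-group query parameters exercised in the tests that follow (the real implementation in the tenancy app may differ, e.g. by using tree-aware filters for groups):

```python
# Sketch only: filters matching the tenant_id/tenant/tenant_group_id/tenant_group params.
import django_filters
from tenancy.models import Tenant, TenantGroup


class TenancyFilterSet(django_filters.FilterSet):
    tenant_group_id = django_filters.ModelMultipleChoiceFilter(
        field_name='tenant__group',
        queryset=TenantGroup.objects.all(),
        label='Tenant group (ID)',
    )
    tenant_group = django_filters.ModelMultipleChoiceFilter(
        field_name='tenant__group__slug',
        queryset=TenantGroup.objects.all(),
        to_field_name='slug',
        label='Tenant group (slug)',
    )
    tenant_id = django_filters.ModelMultipleChoiceFilter(
        queryset=Tenant.objects.all(),
        label='Tenant (ID)',
    )
    tenant = django_filters.ModelMultipleChoiceFilter(
        field_name='tenant__slug',
        queryset=Tenant.objects.all(),
        to_field_name='slug',
        label='Tenant (slug)',
    )
```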

View File

@ -1,6 +1,7 @@
from django.test import TestCase
from dcim.models import DeviceRole, Interface, Platform, Region, Site
from tenancy.models import Tenant, TenantGroup
from virtualization.choices import *
from virtualization.filters import *
from virtualization.models import Cluster, ClusterGroup, ClusterType, VirtualMachine
@ -99,10 +100,24 @@ class ClusterTestCase(TestCase):
)
Site.objects.bulk_create(sites)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
clusters = (
Cluster(name='Cluster 1', type=cluster_types[0], group=cluster_groups[0], site=sites[0]),
Cluster(name='Cluster 2', type=cluster_types[1], group=cluster_groups[1], site=sites[1]),
Cluster(name='Cluster 3', type=cluster_types[2], group=cluster_groups[2], site=sites[2]),
Cluster(name='Cluster 1', type=cluster_types[0], group=cluster_groups[0], site=sites[0], tenant=tenants[0]),
Cluster(name='Cluster 2', type=cluster_types[1], group=cluster_groups[1], site=sites[1], tenant=tenants[1]),
Cluster(name='Cluster 3', type=cluster_types[2], group=cluster_groups[2], site=sites[2], tenant=tenants[2]),
)
Cluster.objects.bulk_create(clusters)
@ -143,6 +158,20 @@ class ClusterTestCase(TestCase):
params = {'type': [types[0].slug, types[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
class VirtualMachineTestCase(TestCase):
queryset = VirtualMachine.objects.all()
@ -202,10 +231,24 @@ class VirtualMachineTestCase(TestCase):
)
DeviceRole.objects.bulk_create(roles)
tenant_groups = (
TenantGroup(name='Tenant group 1', slug='tenant-group-1'),
TenantGroup(name='Tenant group 2', slug='tenant-group-2'),
TenantGroup(name='Tenant group 3', slug='tenant-group-3'),
)
TenantGroup.objects.bulk_create(tenant_groups)
tenants = (
Tenant(name='Tenant 1', slug='tenant-1', group=tenant_groups[0]),
Tenant(name='Tenant 2', slug='tenant-2', group=tenant_groups[1]),
Tenant(name='Tenant 3', slug='tenant-3', group=tenant_groups[2]),
)
Tenant.objects.bulk_create(tenants)
vms = (
VirtualMachine(name='Virtual Machine 1', cluster=clusters[0], platform=platforms[0], role=roles[0], status=VirtualMachineStatusChoices.STATUS_ACTIVE, vcpus=1, memory=1, disk=1, local_context_data={"foo": 123}),
VirtualMachine(name='Virtual Machine 2', cluster=clusters[1], platform=platforms[1], role=roles[1], status=VirtualMachineStatusChoices.STATUS_STAGED, vcpus=2, memory=2, disk=2),
VirtualMachine(name='Virtual Machine 3', cluster=clusters[2], platform=platforms[2], role=roles[2], status=VirtualMachineStatusChoices.STATUS_OFFLINE, vcpus=3, memory=3, disk=3),
VirtualMachine(name='Virtual Machine 1', cluster=clusters[0], platform=platforms[0], role=roles[0], tenant=tenants[0], status=VirtualMachineStatusChoices.STATUS_ACTIVE, vcpus=1, memory=1, disk=1, local_context_data={"foo": 123}),
VirtualMachine(name='Virtual Machine 2', cluster=clusters[1], platform=platforms[1], role=roles[1], tenant=tenants[1], status=VirtualMachineStatusChoices.STATUS_STAGED, vcpus=2, memory=2, disk=2),
VirtualMachine(name='Virtual Machine 3', cluster=clusters[2], platform=platforms[2], role=roles[2], tenant=tenants[2], status=VirtualMachineStatusChoices.STATUS_OFFLINE, vcpus=3, memory=3, disk=3),
)
VirtualMachine.objects.bulk_create(vms)
@ -306,6 +349,20 @@ class VirtualMachineTestCase(TestCase):
params = {'local_context_data': 'false'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant(self):
tenants = Tenant.objects.all()[:2]
params = {'tenant_id': [tenants[0].pk, tenants[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant': [tenants[0].slug, tenants[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_tenant_group(self):
tenant_groups = TenantGroup.objects.all()[:2]
params = {'tenant_group_id': [tenant_groups[0].pk, tenant_groups[1].pk]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'tenant_group': [tenant_groups[0].slug, tenant_groups[1].slug]}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
class InterfaceTestCase(TestCase):
queryset = Interface.objects.all()

View File

@ -13,6 +13,7 @@ django-taggit-serializer==0.1.7
django-timezone-field==4.0
djangorestframework==3.10.3
drf-yasg[validation]==1.17.0
gunicorn==20.0.4
Jinja2==2.10.3
Markdown==2.6.11
netaddr==0.7.19
@ -23,3 +24,4 @@ pycryptodome==3.9.4
PyYAML==5.3
redis==3.3.11
svgwrite==1.3.1
wheel==0.34.2

View File

@ -1,52 +1,67 @@
#!/bin/bash
# This script will prepare NetBox to run after the code has been upgraded to
# its most recent release.
#
# Once the script completes, remember to restart the WSGI service (e.g.
# gunicorn or uWSGI).
cd "$(dirname "$0")"
VIRTUALENV="$(pwd -P)/venv"
PYTHON="python3"
PIP="pip3"
# Remove the existing virtual environment (if any)
if [ -d "$VIRTUALENV" ]; then
COMMAND="rm -rf ${VIRTUALENV}"
echo "Removing old virtual environment..."
eval $COMMAND
else
WARN_MISSING_VENV=1
fi
# Uninstall any Python packages which are no longer needed
COMMAND="${PIP} uninstall -r old_requirements.txt -y"
echo "Removing old Python packages ($COMMAND)..."
eval $COMMAND
# Install any new Python packages
COMMAND="${PIP} install -r requirements.txt --upgrade"
echo "Updating required Python packages ($COMMAND)..."
eval $COMMAND
# Validate Python dependencies
COMMAND="${PIP} check"
echo "Validating Python dependencies ($COMMAND)..."
eval $COMMAND || (
echo "******** PLEASE FIX THE DEPENDENCIES BEFORE CONTINUING ********"
echo "* Manually install newer version(s) of the highlighted packages"
echo "* so that 'pip3 check' passes. For more information see:"
echo "* https://github.com/pypa/pip/issues/988"
# Create a new virtual environment
COMMAND="/usr/bin/python3 -m venv ${VIRTUALENV}"
echo "Creating a new virtual environment at ${VIRTUALENV}..."
eval $COMMAND || {
echo "--------------------------------------------------------------------"
echo "ERROR: Failed to create the virtual environment. Check that you have"
echo "the required system packages installed."
echo "--------------------------------------------------------------------"
exit 1
)
}
# Activate the virtual environment
source "${VIRTUALENV}/bin/activate"
# Install Python packages
COMMAND="pip3 install -r requirements.txt"
echo "Installing Python packages ($COMMAND)..."
eval $COMMAND
# Apply any database migrations
COMMAND="${PYTHON} netbox/manage.py migrate"
COMMAND="python3 netbox/manage.py migrate"
echo "Applying database migrations ($COMMAND)..."
eval $COMMAND
# Delete any stale content types
COMMAND="${PYTHON} netbox/manage.py remove_stale_contenttypes --no-input"
echo "Removing stale content types ($COMMAND)..."
eval $COMMAND
# Collect static files
COMMAND="${PYTHON} netbox/manage.py collectstatic --no-input"
COMMAND="python3 netbox/manage.py collectstatic --no-input"
echo "Collecting static files ($COMMAND)..."
eval $COMMAND
# Delete any stale content types
COMMAND="python3 netbox/manage.py remove_stale_contenttypes --no-input"
echo "Removing stale content types ($COMMAND)..."
eval $COMMAND
# Clear all cached data
COMMAND="${PYTHON} netbox/manage.py invalidate all"
COMMAND="python3 netbox/manage.py invalidate all"
echo "Clearing cache data ($COMMAND)..."
eval $COMMAND
if [ -n "$WARN_MISSING_VENV" ]; then
echo "--------------------------------------------------------------------"
echo "WARNING: No existing virtual environment was detected. A new one has"
echo "been created. Update your systemd service files to reflect the new"
echo "executables."
echo " Python: ${VIRTUALENV}/bin/python"
echo " gunicorn: ${VIRTUALENV}/bin/gunicorn"
echo "--------------------------------------------------------------------"
fi
echo "Upgrade complete! Don't forget to restart the NetBox services:"
echo " sudo systemctl restart netbox netbox-rq"